Laravel 9 Sail error service "laravel.test" is not running container - laravel

I have cloned a repository from GitHub, a Laravel project that already has Sail.
Then in order to install composer dependencies, I ran:
docker run --rm \
-u "$(id -u):$(id -g)" \
-v "$(pwd):/var/www/html" \
-w /var/www/html \
laravelsail/php81-composer:latest \
composer install --ignore-platform-reqs
After that, I ran sail up.
All images pulled and built.
Now I have access to the project and its routes through the browser, and I can even use sail mysql commands. However, the problem is that when I run sail artisan commands, this message shows up:
service "laravel.test" is not running container #1.
I am using Windows with WSL2, which uses Ubuntu 20 as the default Linux distribution.
Tip: in another fresh project I do not have any problems with Sail.
I tried these things before, but they didn't solve the problem:
adding APP_SERVICE=laravel.test to .env
running composer update
To clarify my question, I will add more details below.
docker-compose.yml:
version: "3.7"
services:
#Laravel App
app:
build:
context: ./docker/php/${DOCKER_PHP_VERSION}
dockerfile: Dockerfile
args:
xdebug_enabled: ${DOCKER_PHP_XDEBUG_ENABLED}
image: ${COMPOSE_PROJECT_NAME}-app
restart: unless-stopped
tty: true
working_dir: /var/www/html
environment:
XDEBUG_MODE: '${DOCKER_PHP_XDEBUG_MODE:-off}'
volumes:
- ./:/var/www/html
networks:
- app_network
depends_on:
- mysql
- redis
- meilisearch
- minio
nginx:
image: nginx:alpine
restart: unless-stopped
tty: true
ports:
- '${DOCKER_NGINX_PORT:-80}:80'
volumes:
- ./:/var/www/html
- ./docker/nginx/dev/:/etc/nginx/conf.d/
networks:
- app_network
depends_on:
- app
# S3 Development
minio:
image: 'minio/minio:latest'
ports:
- '${DOCKER_MINIO_PORT:-9000}:9000'
- '${DOCKER_MINIO_CONSOLE_PORT:-8900}:8900'
environment:
MINIO_ROOT_USER: 'laravel'
MINIO_ROOT_PASSWORD: 'password'
volumes:
- 'appminio:/data/minio'
networks:
- app_network
command: minio server /data/minio --console-address ":8900"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
retries: 3
timeout: 5s
# Laravel Scout Search Provider
meilisearch:
image: 'getmeili/meilisearch:latest'
platform: linux/x86_64
environment:
- PUID=${DOCKER_PUID:-1000}
- PGID=${DOCKER_PGID:-1000}
- TZ=${DOCKER_TZ:-Australia/Brisbane}
restart: unless-stopped
ports:
- '${DOCKER_MEILISEARCH_PORT:-7700}:7700'
volumes:
- 'appmeilisearch:/data.ms'
networks:
- app_network
# Database
mysql:
image: 'mysql/mysql-server:8.0'
command: --default-authentication-plugin=mysql_native_password
ports:
- '${DOCKER_MYSQL_PORT:-3306}:3306'
environment:
MYSQL_ROOT_PASSWORD: '${DB_PASSWORD:-abc123}'
MYSQL_ROOT_HOST: "%"
MYSQL_DATABASE: '${DB_DATABASE:-laravel}'
MYSQL_USER: '${DB_USERNAME:-laravel}'
MYSQL_PASSWORD: '${DB_PASSWORD:-abc123}'
MYSQL_ALLOW_EMPTY_PASSWORD: 1
restart: unless-stopped
volumes:
- 'appmysql:/var/lib/mysql'
networks:
- app_network
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
retries: 3
timeout: 5s
# Debug emails sent from the app
mailcatcher:
restart: unless-stopped
image: dockage/mailcatcher
environment:
- PUID=${DOCKER_PUID:-1000}
- PGID=${DOCKER_PGID:-1000}
- TZ=${DOCKER_TZ:-Australia/Brisbane}
ports:
- "${DOCKER_MAILCATCHER_WEB_PORT:-1080}:1080"
- "${DOCKER_MAILCATCHER_SMTP_PORT:-1025}:1025"
networks:
- app_network
# Redis Database
redis:
healthcheck:
test: [ "CMD", "redis-cli", "ping" ]
interval: 1m
timeout: 10s
retries: 3
start_period: 30s
image: redis
restart: unless-stopped
volumes:
- 'appredis:/data'
environment:
- PUID=${DOCKER_PUID:-1000}
- PGID=${DOCKER_PGID:-1000}
- TZ=${DOCKER_TZ:-Australia/Brisbane}
ports:
- ${DOCKER_REDIS_PORT:-6379}:6379
networks:
- app_network
volumes:
appredis:
driver: local
appmysql:
driver: local
appmeilisearch:
driver: local
appminio:
driver: local
networks:
app_network:
driver: bridge
.env:
APP_NAME="Boilerplate"
APP_ENV=local
APP_KEY=base64:vnhPCkeEz8MOUqKv7dYsZvTluoB3bra/aH+MONTUM9I=
APP_DEBUG=true
APP_URL=http://127.0.0.1:8000
FRONTEND_URL=http://127.0.0.1:8000
EMAIL_VERIFICATION_REQUIRED=true
TOKEN_ON_REGISTER=false
LOG_CHANNEL=stack
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=safe_proud
DB_USERNAME=sail
DB_PASSWORD=password
BROADCAST_DRIVER=log
CACHE_DRIVER=file
QUEUE_CONNECTION=database
SESSION_DRIVER=file
SESSION_LIFETIME=120
SESSION_CONNECTION=localhost
REDIS_HOST=redis
REDIS_PASSWORD=null
REDIS_PORT=6379
MAIL_MAILER=smtp
MAIL_HOST=mailcatcher
MAIL_PORT=1025
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null
MAIL_FROM_ADDRESS=developers@presentcompany.co
MAIL_FROM_NAME="${APP_NAME}"
SCOUT_DRIVER=meilisearch
MEILISEARCH_HOST=http://127.0.0.1:7700
MEILISEARCH_KEY=masterKey
#FILESYSTEM_DRIVER=s3
AWS_ACCESS_KEY_ID=laravel
AWS_SECRET_ACCESS_KEY=password
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=store
AWS_ENDPOINT=http://s3:9000
AWS_USE_PATH_STYLE_ENDPOINT=true
PUSHER_APP_ID=
PUSHER_APP_KEY=
PUSHER_APP_SECRET=
PUSHER_APP_CLUSTER=mt1
MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
MIX_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
DOCKER_PUID=1000
DOCKER_PGID=1000
DOCKER_TZ=Australia/Brisbane
DOCKER_NGINX_PORT=8000
DOCKER_REDIS_PORT=6379
DOCKER_MAILCATCHER_WEB_PORT=1080
DOCKER_MAILCATCHER_SMTP_PORT=1025
DOCKER_MEILISEARCH_PORT=7700
DOCKER_MYSQL_PORT=3306
DOCKER_MINIO_PORT=9000
DOCKER_MINIO_CONSOLE_PORT=8900
COMPOSE_PROJECT_NAME=boilerplate
DOCKER_PHP_VERSION=8.1
DOCKER_PHP_XDEBUG_ENABLED=false
DOCKER_PHP_XDEBUG_MODE=develop,debug
composer.json:
{
    "name": "laravel/laravel",
    "type": "project",
    "description": "Safe Proud API",
    "keywords": ["framework", "laravel"],
    "license": "MIT",
    "require": {
        "php": "^8.0|^8.1|^8.2",
        "ext-curl": "*",
        "ext-json": "*",
        "aws/aws-sdk-php": "^3.144",
        "balping/laravel-hashslug": "^2.2",
        "bolechen/nova-activitylog": "^v0.3.0",
        "classic-o/nova-media-library": "^1.0",
        "cloudcake/nova-snowball": "^1.2",
        "dcblogdev/laravel-sent-emails": "^2.0",
        "emilianotisato/nova-tinymce": "^1",
        "eminiarts/nova-tabs": "^1.5",
        "guzzlehttp/guzzle": "^7.2",
        "johnathan/nova-trumbowyg": "^1.0",
        "kutia-software-company/larafirebase": "^1.3",
        "laravel/framework": "^9.19",
        "laravel/nova": "*",
        "laravel/sanctum": "^3.0",
        "laravel/scout": "^9.4",
        "laravel/tinker": "^2.7",
        "laravel/vapor-cli": "^1.13",
        "laravel/vapor-core": "^2.22",
        "laravel/vapor-ui": "^1.5",
        "league/flysystem-aws-s3-v3": "~3.0",
        "mpociot/versionable": "^4.3",
        "nnjeim/world": "^1.1",
        "optimistdigital/nova-page-manager": "^3.1",
        "outl1ne/nova-settings": "^3.5",
        "spatie/laravel-activitylog": "^4.5",
        "spatie/laravel-permission": "^5.0",
        "vinkla/hashids": "^10.0",
        "vyuldashev/nova-permission": "^3.1",
        "whitecube/nova-flexible-content": "^0.2.6",
        "yab/laravel-scout-mysql-driver": "^5.1"
    },
    "require-dev": {
        "fakerphp/faker": "^1.9.1",
        "laravel/pint": "^1.1",
        "laravel/sail": "^1.0.1",
        "mockery/mockery": "^1.4.4",
        "nunomaduro/collision": "^6.1",
        "phpunit/phpunit": "^9.5.10",
        "spatie/laravel-ignition": "^1.0"
    },
    "repositories": [
        {
            "type": "path",
            "url": "./nova"
        }
    ],
    "autoload": {
        "psr-4": {
            "App\\": "app/",
            "Database\\Factories\\": "database/factories/",
            "Database\\Seeders\\": "database/seeders/"
        },
        "files": [
            "app/Http/helpers.php"
        ]
    },
    "autoload-dev": {
        "psr-4": {
            "Tests\\": "tests/"
        }
    },
    "scripts": {
        "post-autoload-dump": [
            "Illuminate\\Foundation\\ComposerScripts::postAutoloadDump",
            "@php artisan package:discover --ansi"
        ],
        "post-update-cmd": [
            "@php artisan vendor:publish --tag=laravel-assets --ansi --force",
            "@php artisan vapor-ui:publish --ansi"
        ],
        "post-root-package-install": [
            "@php -r \"file_exists('.env') || copy('.env.example', '.env');\""
        ],
        "post-create-project-cmd": [
            "@php artisan key:generate --ansi"
        ]
    },
    "extra": {
        "laravel": {
            "dont-discover": []
        }
    },
    "config": {
        "optimize-autoloader": true,
        "preferred-install": "dist",
        "sort-packages": true,
        "allow-plugins": {
            "pestphp/pest-plugin": true
        }
    },
    "minimum-stability": "dev",
    "prefer-stable": true
}
Also, when I run sail up:
[+] Running 7/0
⠿ Container boilerplate-meilisearch-1 Created 0.0s
⠿ Container boilerplate-redis-1 Created 0.0s
⠿ Container boilerplate-minio-1 Created 0.0s
⠿ Container boilerplate-mysql-1 Created 0.0s
⠿ Container boilerplate-mailcatcher-1 Created 0.0s
⠿ Container boilerplate-app-1 Created 0.0s
⠿ Container boilerplate-nginx-1 Created 0.0s
Attaching to boilerplate-app-1, boilerplate-mailcatcher-1, boilerplate-meilisearch-1, boilerplate-minio-1, boilerplate-mysql-1, boilerplate-nginx-1, boilerplate-redis-1
boilerplate-redis-1 | 1:C 18 Nov 2022 20:44:41.772 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
boilerplate-redis-1 | 1:C 18 Nov 2022 20:44:41.772 # Redis version=7.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
boilerplate-redis-1 | 1:C 18 Nov 2022 20:44:41.772 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.774 * monotonic clock: POSIX clock_gettime
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.775 * Running mode=standalone, port=6379.
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.775 # Server initialized
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.775 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.776 * Loading RDB produced by version 7.0.5
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.776 * RDB age 53 seconds
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.776 * RDB memory usage when created 0.85 Mb
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.776 * Done loading RDB, keys loaded: 0, keys expired: 0.
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.776 * DB loaded from disk: 0.000 seconds
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.776 * Ready to accept connections
boilerplate-mailcatcher-1 | Starting MailCatcher v0.8.2
boilerplate-mailcatcher-1 | ==> smtp://0.0.0.0:1025
boilerplate-mysql-1 | [Entrypoint] MySQL Docker Image 8.0.31-1.2.10-server
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | [Meilisearch ASCII-art banner]
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | Database path: "./data.ms"
boilerplate-meilisearch-1 | Server listening on: "http://0.0.0.0:7700"
boilerplate-meilisearch-1 | Environment: "development"
boilerplate-meilisearch-1 | Commit SHA: "unknown"
boilerplate-meilisearch-1 | Commit date: "unknown"
boilerplate-meilisearch-1 | Package version: "0.29.1"
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | Thank you for using Meilisearch!
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | We collect anonymized analytics to improve our product and your experience. To learn more, including how to turn off analytics, visit our dedicated documentation page: https://docs.meilisearch.com/learn/what_is_meilisearch/telemetry.html
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | Anonymous telemetry: "Enabled"
boilerplate-meilisearch-1 | Instance UID: "1fa4148e-c0dc-46c7-9f61-f4abb8f0354c"
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | No master key found; The server will accept unidentified requests. If you need some protection in development mode, please export a key: export MEILI_MASTER_KEY=xxx
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | Documentation: https://docs.meilisearch.com
boilerplate-meilisearch-1 | Source code: https://github.com/meilisearch/meilisearch
boilerplate-meilisearch-1 | Contact: https://docs.meilisearch.com/resources/contact.html
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | [2022-11-18T10:44:42Z INFO actix_server::builder] Starting 4 workers
boilerplate-meilisearch-1 | [2022-11-18T10:44:42Z INFO actix_server::server] Actix runtime found; starting in Actix runtime
boilerplate-mailcatcher-1 | ==> http://0.0.0.0:1080
boilerplate-mysql-1 | [Entrypoint] Starting MySQL 8.0.31-1.2.10-server
boilerplate-minio-1 | Warning: Default parity set to 0. This can lead to data loss.
boilerplate-minio-1 | MinIO Object Storage Server
boilerplate-minio-1 | Copyright: 2015-2022 MinIO, Inc.
boilerplate-minio-1 | License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
boilerplate-minio-1 | Version: RELEASE.2022-11-11T03-44-20Z (go1.19.3 linux/amd64)
boilerplate-minio-1 |
boilerplate-minio-1 | Status: 1 Online, 0 Offline.
boilerplate-minio-1 | API: http://172.19.0.5:9000 http://127.0.0.1:9000
boilerplate-minio-1 | Console: http://172.19.0.5:8900 http://127.0.0.1:8900
boilerplate-minio-1 |
boilerplate-minio-1 | Documentation: https://min.io/docs/minio/linux/index.html
boilerplate-mysql-1 | 2022-11-18T10:44:42.878570Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead.
boilerplate-mysql-1 | 2022-11-18T10:44:42.879744Z 0 [Warning] [MY-010918] [Server] 'default_authentication_plugin' is deprecated and will be removed in a future release. Please use authentication_policy instead.
boilerplate-mysql-1 | 2022-11-18T10:44:42.879769Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.31) starting as process 1
boilerplate-mysql-1 | 2022-11-18T10:44:42.886273Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
boilerplate-mysql-1 | 2022-11-18T10:44:43.007089Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
boilerplate-app-1 | Installing Package Dependencies
boilerplate-app-1 | Installing dependencies from lock file (including require-dev)
boilerplate-app-1 | Verifying lock file contents can be installed on current platform.
boilerplate-app-1 | Nothing to install, update or remove
boilerplate-mysql-1 | 2022-11-18T10:44:43.275853Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
boilerplate-mysql-1 | 2022-11-18T10:44:43.275917Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
boilerplate-app-1 | Package gregoriohc/laravel-nova-theme-responsive is abandoned, you should avoid using it. No replacement was suggested.
boilerplate-app-1 | Generating optimized autoload files
boilerplate-mysql-1 | 2022-11-18T10:44:43.316329Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
boilerplate-mysql-1 | 2022-11-18T10:44:43.316463Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.31' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server - GPL.
boilerplate-nginx-1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
boilerplate-nginx-1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
boilerplate-nginx-1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
boilerplate-nginx-1 | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
boilerplate-nginx-1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
boilerplate-nginx-1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
boilerplate-nginx-1 | /docker-entrypoint.sh: Configuration complete; ready for start up
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: using the "epoll" event method
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: nginx/1.23.2
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: built by gcc 11.2.1 20220219 (Alpine 11.2.1_git20220219)
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: OS: Linux 5.10.102.1-microsoft-standard-WSL2
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker processes
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 20
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 21
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 22
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 23
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 24
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 25
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 26
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 27
boilerplate-minio-1 |
boilerplate-minio-1 | You are running an older version of MinIO released 6 days ago
boilerplate-minio-1 | Update: Run `mc admin update`
boilerplate-minio-1 |
boilerplate-minio-1 |
boilerplate-app-1 | Class App\Http\Resources\Api\v1\AddressResource located in ./app/Http/Resources/Api/V1/AddressResource.php does not comply with psr-4 autoloading standard. Skipping.
boilerplate-app-1 | > Illuminate\Foundation\ComposerScripts::postAutoloadDump
boilerplate-app-1 | > @php artisan package:discover --ansi
boilerplate-app-1 |
boilerplate-app-1 | INFO Discovering packages.
boilerplate-app-1 |
boilerplate-app-1 | bolechen/nova-activitylog ............................................. DONE
boilerplate-app-1 | classic-o/nova-media-library .......................................... DONE
boilerplate-app-1 | cloudcake/nova-fixed-bars ............................................. DONE
boilerplate-app-1 | cloudcake/nova-snowball ............................................... DONE
boilerplate-app-1 | dcblogdev/laravel-sent-emails ......................................... DONE
boilerplate-app-1 | emilianotisato/nova-tinymce ........................................... DONE
boilerplate-app-1 | eminiarts/nova-tabs ................................................... DONE
boilerplate-app-1 | gregoriohc/laravel-nova-theme-responsive .............................. DONE
boilerplate-app-1 | intervention/image .................................................... DONE
boilerplate-app-1 | johnathan/nova-trumbowyg .............................................. DONE
boilerplate-app-1 | kutia-software-company/larafirebase ................................... DONE
boilerplate-app-1 | laravel/nova .......................................................... DONE
boilerplate-app-1 | laravel/sail .......................................................... DONE
boilerplate-app-1 | laravel/sanctum ....................................................... DONE
boilerplate-app-1 | laravel/scout ......................................................... DONE
boilerplate-app-1 | laravel/tinker ........................................................ DONE
boilerplate-app-1 | laravel/ui ............................................................ DONE
boilerplate-app-1 | laravel/vapor-core .................................................... DONE
boilerplate-app-1 | laravel/vapor-ui ...................................................... DONE
boilerplate-app-1 | mpociot/versionable ................................................... DONE
boilerplate-app-1 | nesbot/carbon ......................................................... DONE
boilerplate-app-1 | nnjeim/world .......................................................... DONE
boilerplate-app-1 | nunomaduro/collision .................................................. DONE
boilerplate-app-1 | nunomaduro/termwind ................................................... DONE
boilerplate-app-1 | optimistdigital/nova-locale-field ..................................... DONE
boilerplate-app-1 | optimistdigital/nova-page-manager ..................................... DONE
boilerplate-app-1 | optimistdigital/nova-translations-loader .............................. DONE
boilerplate-app-1 | outl1ne/nova-settings ................................................. DONE
boilerplate-app-1 | spatie/laravel-activitylog ............................................ DONE
boilerplate-app-1 | spatie/laravel-ignition ............................................... DONE
boilerplate-app-1 | spatie/laravel-permission ............................................. DONE
boilerplate-app-1 | vinkla/hashids ........................................................ DONE
boilerplate-app-1 | vyuldashev/nova-permission ............................................ DONE
boilerplate-app-1 | whitecube/nova-flexible-content ....................................... DONE
boilerplate-app-1 | yab/laravel-scout-mysql-driver ........................................ DONE
boilerplate-app-1 |
boilerplate-app-1 | 100 packages you are using are looking for funding.
boilerplate-app-1 | Use the `composer fund` command to find out more!
boilerplate-app-1 | Running database migrations
boilerplate-app-1 |
boilerplate-app-1 | INFO Nothing to migrate.
boilerplate-app-1 |
boilerplate-app-1 | Linking Storage
boilerplate-app-1 |
boilerplate-app-1 | ERROR The [public/storage] link already exists.
boilerplate-app-1 |
boilerplate-app-1 | Generating IDE Helper Stubs
boilerplate-app-1 |
boilerplate-app-1 | ERROR There are no commands defined in the "ide-helper" namespace.
boilerplate-app-1 |
boilerplate-app-1 |
boilerplate-app-1 | ERROR There are no commands defined in the "ide-helper" namespace.
boilerplate-app-1 |
boilerplate-app-1 | 2022-11-18 10:44:49,582 INFO Set uid to user 0 succeeded
boilerplate-app-1 | 2022-11-18 10:44:49,584 INFO supervisord started with pid 39
boilerplate-app-1 | 2022-11-18 10:44:50,586 INFO spawned: 'php' with pid 40
boilerplate-app-1 | [18-Nov-2022 10:44:50] NOTICE: fpm is running, pid 40
boilerplate-app-1 | [18-Nov-2022 10:44:50] NOTICE: ready to handle connections
boilerplate-app-1 | 2022-11-18 10:44:51,610 INFO success: php entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
boilerplate-nginx-1 | 172.19.0.1 - - [18/Nov/2022:10:45:13 +0000] "GET /admin/dashboard HTTP/1.1" 200 17616 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"
boilerplate-app-1 | 172.19.0.8 - 18/Nov/2022:10:45:13 +0000 "GET /index.php" 200
boilerplate-nginx-1 | 172.19.0.1 - - [18/Nov/2022:10:45:16 +0000] "GET / HTTP/1.1" 200 17601 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"
boilerplate-app-1 | 172.19.0.8 - 18/Nov/2022:10:45:16 +0000 "GET /index.php" 200

I found the solution.
After looking at this part of docker-compose.yml:
version: "3.7"
services:
#Laravel App
app:
build:
context: ./docker/php/${DOCKER_PHP_VERSION}
dockerfile: Dockerfile
args:
xdebug_enabled: ${DOCKER_PHP_XDEBUG_ENABLED}
image: ${COMPOSE_PROJECT_NAME}-app
restart: unless-stopped
tty: true
working_dir: /var/www/html
I realized that the Laravel application service has a different name in this project: it is called app instead of laravel.test.
So, in the .env file, I added this:
APP_SERVICE=app
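For reference, the sail script runs artisan, composer and similar commands against the Compose service named by APP_SERVICE and falls back to laravel.test when that variable is unset. Below is a minimal sketch of how to check the right service name and point Sail at it; the listed output is what this particular docker-compose.yml would produce, so adjust it to your own project:
# List the Compose service names for this project; the Laravel app service
# here is "app", not Sail's default "laravel.test".
docker compose ps --services
# Point Sail at the right service (this is the same line added to .env above).
echo "APP_SERVICE=app" >> .env
# Restart the stack; artisan commands should work again.
./vendor/bin/sail up -d
./vendor/bin/sail artisan list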

Related

<AWS EKS / Fargate / Kubernetes> "Communications link failure" on container startup

I was testing a Kubernetes setup with AWS EKS on Fargate and encountered an issue on container startup.
It is a Java application using Hibernate. It seems it fails to connect to the MySQL server on startup, giving a "Communications link failure" error. The database server is running properly on AWS RDS, and the Docker image runs as expected locally.
I wonder if this is caused by the MySQL port 3306 not being configured properly on the container/node/service. I would appreciate it if you can spot what the issue is; please don't hesitate to point out any misconfiguration, thank you very much.
Pod startup log
[Spring Boot ASCII-art banner]
:: Spring Boot :: (v2.3.1.RELEASE)
2020-08-13 11:39:39.930 INFO 1 --- [ main] com.example.demo.DemoApplication : The following profiles are active: prod
2020-08-13 11:39:58.536 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFERRED mode.
...
......
2020-08-13 11:41:27.606 ERROR 1 --- [ task-1] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:836) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:197) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477) [HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:560) [HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) [HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) [HikariCP-3.4.5.jar!/:na]
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator$ConnectionProviderJdbcConnectionAccess.obtainConnection(JdbcEnvironmentInitiator.java:180) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:68) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:35) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.boot.registry.internal.StandardServiceRegistryImpl.initiateService(StandardServiceRegistryImpl.java:101) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.createService(AbstractServiceRegistryImpl.java:263) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:237) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.id.factory.internal.DefaultIdentifierGeneratorFactory.injectServices(DefaultIdentifierGeneratorFactory.java:152) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.injectDependencies(AbstractServiceRegistryImpl.java:286) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:243) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.boot.internal.InFlightMetadataCollectorImpl.<init>(InFlightMetadataCollectorImpl.java:176) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.boot.model.process.spi.MetadataBuildingProcess.complete(MetadataBuildingProcess.java:118) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.metadata(EntityManagerFactoryBuilderImpl.java:1224) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:1255) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.springframework.orm.jpa.vendor.SpringHibernateJpaPersistenceProvider.createContainerEntityManagerFactory(SpringHibernateJpaPersistenceProvider.java:58) [spring-orm-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:365) [spring-orm-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:391) [spring-orm-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_212]
...
......
Service
patricks-mbp:test patrick$ kubectl get services -n test
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test NodePort 10.100.160.22 <none> 80:31176/TCP 4h57m
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: test
  namespace: test
spec:
  selector:
    app: test
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Deployment
patricks-mbp:test patrick$ kubectl get deployments -n test
NAME READY UP-TO-DATE AVAILABLE AGE
test 0/1 1 0 4h42m
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  strategy: {}
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: <image location>
          ports:
            - containerPort: 8080
          resources: {}
Pods
patricks-mbp:test patrick$ kubectl get pods -n test
NAME READY STATUS RESTARTS AGE
test-8648f7959-4gdvm 1/1 Running 6 21m
patricks-mbp:test patrick$ kubectl describe pod test-8648f7959-4gdvm -n test
Name: test-8648f7959-4gdvm
Namespace: test
Priority: 2000001000
Priority Class Name: system-node-critical
Node: fargate-ip-192-168-123-170.ec2.internal/192.168.123.170
Start Time: Thu, 13 Aug 2020 21:29:07 +1000
Labels: app=test
eks.amazonaws.com/fargate-profile=fp-1a0330f1
pod-template-hash=8648f7959
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 192.168.123.170
IPs:
IP: 192.168.123.170
Controlled By: ReplicaSet/test-8648f7959
Containers:
test:
Container ID: containerd://a1517a13d66274e1d7f8efcea950d0fe3d944d1f7208d057494e208223a895a7
Image: <image location>
Image ID: <image ID>
Port: 8080/TCP
Host Port: 0/TCP
State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 13 Aug 2020 21:48:07 +1000
Finished: Thu, 13 Aug 2020 21:50:28 +1000
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 13 Aug 2020 21:43:04 +1000
Finished: Thu, 13 Aug 2020 21:45:22 +1000
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5hdzd (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-5hdzd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5hdzd
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> fargate-scheduler Successfully assigned test/test-8648f7959-4gdvm to fargate-ip-192-168-123-170.ec2.internal
Normal Pulling 21m kubelet, fargate-ip-192-168-123-170.ec2.internal Pulling image "174304792831.dkr.ecr.us-east-1.amazonaws.com/test:v2"
Normal Pulled 21m kubelet, fargate-ip-192-168-123-170.ec2.internal Successfully pulled image "174304792831.dkr.ecr.us-east-1.amazonaws.com/test:v2"
Normal Created 11m (x5 over 21m) kubelet, fargate-ip-192-168-123-170.ec2.internal Created container test
Normal Started 11m (x5 over 21m) kubelet, fargate-ip-192-168-123-170.ec2.internal Started container test
Normal Pulled 11m (x4 over 19m) kubelet, fargate-ip-192-168-123-170.ec2.internal Container image "174304792831.dkr.ecr.us-east-1.amazonaws.com/test:v2" already present on machine
Warning BackOff 11s (x27 over 17m) kubelet, fargate-ip-192-168-123-170.ec2.internal Back-off restarting failed container
Ingress
patricks-mbp:~ patrick$ kubectl describe ing -n test test
Name: test
Namespace: test
Address: <ALB public address>
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/ test:80 (192.168.72.15:8080)
Annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"alb.ingress.kubernetes.io/scheme":"internet-facing","alb.ingress.kubernetes.io/target-type":"ip","kubernetes.io/ingress.class":"alb"},"name":"test","namespace":"test"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"test","servicePort":80},"path":"/"}]}}]}}
kubernetes.io/ingress.class: alb
Events: <none>
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: test
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: test
              servicePort: 80
AWS ALB ingress controller
Permission for ALB ingress controller to communicate with cluster
-> similar to https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/rbac-role.yaml
Creation of Ingress Controller which uses ALB
-> similar to https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/alb-ingress-controller.yaml
To allow a pod running on Fargate to connect to RDS, you need to open the security group:
Find the security group ID of your Fargate service.
In your RDS security group inbound rules, instead of putting a CIDR in the source field, put the Fargate service security group ID, for port 3306.
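If you prefer the AWS CLI over the console, this is a minimal sketch of the same rule; both security group IDs are placeholders you need to replace with your own RDS and Fargate pod security groups:
# Allow MySQL (3306) from the Fargate pods' security group into the RDS security group.
aws ec2 authorize-security-group-ingress \
    --group-id <rds-security-group-id> \
    --protocol tcp \
    --port 3306 \
    --source-group <fargate-pod-security-group-id>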

conda init bash doesn't work into github actions

I want to activate a conda environment and install some packages in GitHub Actions. I tried source activate myenv and activate myenv, but this step doesn't activate anything.
upload_package(){
    conda config --set anaconda_upload yes
    apt-get update
    apt-get install -y build-essential
    conda init bash
    conda create -n myenv python=3.6
    conda activate myenv
    echo $PWD
    echo "$VIRTUAL_ENV"
    conda install --yes pip
    conda install --yes numpy cython
    conda install --yes -c conda-forge nose mdtraj
    anaconda login --username $INPUT_ANACONDAUSERNAME --password $INPUT_ANACONDAPASSWORD
    echo $PWD
    echo "$VIRTUAL_ENV"
    conda build /github/workspace
    anaconda logout
}
I tried to check with echo "$VIRTUAL_ENV", but it either prints nothing (with source activate myenv and activate myenv) or gives the error below. I don't know how to handle this, since I can't close and restart the shell as it runs on GitHub Actions. I would appreciate your help.
2020-03-04T10:50:25.2340743Z + conda init bash
2020-03-04T10:50:25.3835655Z no change /opt/conda/condabin/conda
2020-03-04T10:50:25.3836003Z no change /opt/conda/bin/conda
2020-03-04T10:50:25.3836764Z no change /opt/conda/bin/conda-env
2020-03-04T10:50:25.3836919Z no change /opt/conda/bin/activate
2020-03-04T10:50:25.3837068Z no change /opt/conda/bin/deactivate
2020-03-04T10:50:25.3837223Z no change /opt/conda/etc/profile.d/conda.sh
2020-03-04T10:50:25.3837381Z no change /opt/conda/etc/fish/conf.d/conda.fish
2020-03-04T10:50:25.3837539Z no change /opt/conda/shell/condabin/Conda.psm1
2020-03-04T10:50:25.3837904Z no change /opt/conda/shell/condabin/conda-hook.ps1
2020-03-04T10:50:25.3838273Z no change /opt/conda/lib/python3.7/site-packages/xontrib/conda.xsh
2020-03-04T10:50:25.3838453Z no change /opt/conda/etc/profile.d/conda.csh
2020-03-04T10:50:25.3838607Z modified /github/home/.bashrc
2020-03-04T10:50:25.3838685Z
2020-03-04T10:50:25.3839039Z ==> For changes to take effect, close and re-open your current shell. <==
2020-03-04T10:50:25.3839151Z
2020-03-04T10:50:25.4034555Z + conda create -n myenv python=3.6
2020-03-04T10:50:25.9277471Z Collecting package metadata (current_repodata.json): ...working... done
2020-03-04T10:50:25.9642682Z Solving environment: ...working... done
2020-03-04T10:50:26.0490197Z
2020-03-04T10:50:26.0490460Z ## Package Plan ##
2020-03-04T10:50:26.0490535Z
2020-03-04T10:50:26.0490685Z environment location: /opt/conda/envs/myenv
2020-03-04T10:50:26.0490781Z
2020-03-04T10:50:26.0490926Z added / updated specs:
2020-03-04T10:50:26.0491662Z - python=3.6
2020-03-04T10:50:26.0491756Z
2020-03-04T10:50:26.0491824Z
2020-03-04T10:50:26.0491969Z The following packages will be downloaded:
2020-03-04T10:50:26.0492062Z
2020-03-04T10:50:26.0492460Z package | build
2020-03-04T10:50:26.0492845Z ---------------------------|-----------------
2020-03-04T10:50:26.0493211Z _libgcc_mutex-0.1 | main 3 KB
2020-03-04T10:50:26.0493583Z certifi-2019.11.28 | py36_0 153 KB
2020-03-04T10:50:26.0493961Z ld_impl_linux-64-2.33.1 | h53a641e_7 568 KB
2020-03-04T10:50:26.0494323Z libedit-3.1.20181209 | hc058e9b_0 163 KB
2020-03-04T10:50:26.0494679Z libffi-3.2.1 | hd88cf55_4 40 KB
2020-03-04T10:50:26.0495035Z libgcc-ng-9.1.0 | hdf63c60_0 5.1 MB
2020-03-04T10:50:26.0495392Z libstdcxx-ng-9.1.0 | hdf63c60_0 3.1 MB
2020-03-04T10:50:26.0495732Z ncurses-6.2 | he6710b0_0 1.1 MB
2020-03-04T10:50:26.0496087Z pip-20.0.2 | py36_1 1.7 MB
2020-03-04T10:50:26.0496448Z python-3.6.10 | h0371630_0 29.7 MB
2020-03-04T10:50:26.0496812Z readline-7.0 | h7b6447c_5 324 KB
2020-03-04T10:50:26.0497169Z setuptools-45.2.0 | py36_0 520 KB
2020-03-04T10:50:26.0497519Z sqlite-3.31.1 | h7b6447c_0 1.1 MB
2020-03-04T10:50:26.0497879Z tk-8.6.8 | hbc83047_0 2.8 MB
2020-03-04T10:50:26.0498229Z wheel-0.34.2 | py36_0 51 KB
2020-03-04T10:50:26.0498579Z xz-5.2.4 | h14c3975_4 283 KB
2020-03-04T10:50:26.0498931Z zlib-1.2.11 | h7b6447c_3 103 KB
2020-03-04T10:50:26.0499273Z ------------------------------------------------------------
2020-03-04T10:50:26.0499450Z Total: 46.7 MB
2020-03-04T10:50:26.0499541Z
2020-03-04T10:50:26.0499681Z The following NEW packages will be INSTALLED:
2020-03-04T10:50:26.0499758Z
2020-03-04T10:50:26.0500489Z _libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main
2020-03-04T10:50:26.0501110Z ca-certificates pkgs/main/linux-64::ca-certificates-2020.1.1-0
2020-03-04T10:50:26.0501722Z certifi pkgs/main/linux-64::certifi-2019.11.28-py36_0
2020-03-04T10:50:26.0502345Z ld_impl_linux-64 pkgs/main/linux-64::ld_impl_linux-64-2.33.1-h53a641e_7
2020-03-04T10:50:26.0502969Z libedit pkgs/main/linux-64::libedit-3.1.20181209-hc058e9b_0
2020-03-04T10:50:26.0503577Z libffi pkgs/main/linux-64::libffi-3.2.1-hd88cf55_4
2020-03-04T10:50:26.0504179Z libgcc-ng pkgs/main/linux-64::libgcc-ng-9.1.0-hdf63c60_0
2020-03-04T10:50:26.0504791Z libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-9.1.0-hdf63c60_0
2020-03-04T10:50:26.0505394Z ncurses pkgs/main/linux-64::ncurses-6.2-he6710b0_0
2020-03-04T10:50:26.0506278Z openssl pkgs/main/linux-64::openssl-1.1.1d-h7b6447c_4
2020-03-04T10:50:26.0506910Z pip pkgs/main/linux-64::pip-20.0.2-py36_1
2020-03-04T10:50:26.0507509Z python pkgs/main/linux-64::python-3.6.10-h0371630_0
2020-03-04T10:50:26.0508111Z readline pkgs/main/linux-64::readline-7.0-h7b6447c_5
2020-03-04T10:50:26.0508708Z setuptools pkgs/main/linux-64::setuptools-45.2.0-py36_0
2020-03-04T10:50:26.0509475Z sqlite pkgs/main/linux-64::sqlite-3.31.1-h7b6447c_0
2020-03-04T10:50:26.0510365Z tk pkgs/main/linux-64::tk-8.6.8-hbc83047_0
2020-03-04T10:50:26.0510964Z wheel pkgs/main/linux-64::wheel-0.34.2-py36_0
2020-03-04T10:50:26.0511551Z xz pkgs/main/linux-64::xz-5.2.4-h14c3975_4
2020-03-04T10:50:26.0512141Z zlib pkgs/main/linux-64::zlib-1.2.11-h7b6447c_3
2020-03-04T10:50:26.0512229Z
2020-03-04T10:50:26.0512313Z
2020-03-04T10:50:26.0512447Z Proceed ([y]/n)?
2020-03-04T10:50:26.0519068Z
2020-03-04T10:50:26.0519270Z Downloading and Extracting Packages
2020-03-04T10:50:26.0519494Z
2020-03-04T10:50:26.1318915Z libedit-3.1.20181209 | 163 KB | | 0%
2020-03-04T10:50:26.1319506Z libedit-3.1.20181209 | 163 KB | ########## | 100%
2020-03-04T10:50:26.1319646Z
2020-03-04T10:50:26.3010399Z libgcc-ng-9.1.0 | 5.1 MB | | 0%
2020-03-04T10:50:26.3016426Z libgcc-ng-9.1.0 | 5.1 MB | ########## | 100%
2020-03-04T10:50:26.3016551Z
2020-03-04T10:50:26.4178401Z tk-8.6.8 | 2.8 MB | | 0%
2020-03-04T10:50:26.4189777Z tk-8.6.8 | 2.8 MB | ########## | 100%
2020-03-04T10:50:26.4190390Z
2020-03-04T10:50:26.4512708Z zlib-1.2.11 | 103 KB | | 0%
2020-03-04T10:50:26.4513072Z zlib-1.2.11 | 103 KB | ########## | 100%
2020-03-04T10:50:26.4513153Z
2020-03-04T10:50:26.4688184Z _libgcc_mutex-0.1 | 3 KB | | 0%
2020-03-04T10:50:26.4694660Z _libgcc_mutex-0.1 | 3 KB | ########## | 100%
2020-03-04T10:50:26.4694795Z
2020-03-04T10:50:26.8854186Z ncurses-6.2 | 1.1 MB | | 0%
2020-03-04T10:50:26.8854823Z ncurses-6.2 | 1.1 MB | ########## | 100%
2020-03-04T10:50:26.8854969Z
2020-03-04T10:50:26.9238174Z ld_impl_linux-64-2.3 | 568 KB | | 0%
2020-03-04T10:50:26.9238718Z ld_impl_linux-64-2.3 | 568 KB | ########## | 100%
2020-03-04T10:50:26.9238853Z
2020-03-04T10:50:26.9544406Z libffi-3.2.1 | 40 KB | | 0%
2020-03-04T10:50:26.9545028Z libffi-3.2.1 | 40 KB | ########## | 100%
2020-03-04T10:50:26.9545172Z
2020-03-04T10:50:26.9767359Z certifi-2019.11.28 | 153 KB | | 0%
2020-03-04T10:50:26.9773520Z certifi-2019.11.28 | 153 KB | ########## | 100%
2020-03-04T10:50:26.9773720Z
2020-03-04T10:50:27.1019191Z pip-20.0.2 | 1.7 MB | | 0%
2020-03-04T10:50:27.1019873Z pip-20.0.2 | 1.7 MB | ########## | 100%
2020-03-04T10:50:27.1020024Z
2020-03-04T10:50:27.1284184Z wheel-0.34.2 | 51 KB | | 0%
2020-03-04T10:50:27.1284729Z wheel-0.34.2 | 51 KB | ########## | 100%
2020-03-04T10:50:27.1284982Z
2020-03-04T10:50:27.1635023Z xz-5.2.4 | 283 KB | | 0%
2020-03-04T10:50:27.1635574Z xz-5.2.4 | 283 KB | ########## | 100%
2020-03-04T10:50:27.1635690Z
2020-03-04T10:50:27.1957966Z readline-7.0 | 324 KB | | 0%
2020-03-04T10:50:27.1958452Z readline-7.0 | 324 KB | ########## | 100%
2020-03-04T10:50:27.1958592Z
2020-03-04T10:50:27.2419085Z sqlite-3.31.1 | 1.1 MB | | 0%
2020-03-04T10:50:27.2419569Z sqlite-3.31.1 | 1.1 MB | ########## | 100%
2020-03-04T10:50:27.2419671Z
2020-03-04T10:50:27.3421516Z python-3.6.10 | 29.7 MB | | 0%
2020-03-04T10:50:27.4422601Z python-3.6.10 | 29.7 MB | ##4 | 24%
2020-03-04T10:50:27.9656988Z python-3.6.10 | 29.7 MB | ####### | 71%
2020-03-04T10:50:27.9657727Z python-3.6.10 | 29.7 MB | ########## | 100%
2020-03-04T10:50:27.9657877Z
2020-03-04T10:50:28.0623746Z libstdcxx-ng-9.1.0 | 3.1 MB | | 0%
2020-03-04T10:50:28.0624544Z libstdcxx-ng-9.1.0 | 3.1 MB | ########## | 100%
2020-03-04T10:50:28.0624701Z
2020-03-04T10:50:28.1067800Z setuptools-45.2.0 | 520 KB | | 0%
2020-03-04T10:50:28.1068596Z setuptools-45.2.0 | 520 KB | ########## | 100%
2020-03-04T10:50:28.3879641Z Preparing transaction: ...working... done
2020-03-04T10:50:29.1497815Z Verifying transaction: ...working... done
2020-03-04T10:50:29.7835172Z Executing transaction: ...working... done
2020-03-04T10:50:29.7902932Z #
2020-03-04T10:50:29.7903104Z # To activate this environment, use
2020-03-04T10:50:29.7903242Z #
2020-03-04T10:50:29.7903364Z # $ conda activate myenv
2020-03-04T10:50:29.7903502Z #
2020-03-04T10:50:29.7903644Z # To deactivate an active environment, use
2020-03-04T10:50:29.7903785Z #
2020-03-04T10:50:29.7903919Z # $ conda deactivate
2020-03-04T10:50:29.7904010Z
2020-03-04T10:50:29.9029429Z + conda activate myenv
2020-03-04T10:50:30.0185463Z
2020-03-04T10:50:30.0186577Z CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
2020-03-04T10:50:30.0186790Z To initialize your shell, run
2020-03-04T10:50:30.0186898Z
2020-03-04T10:50:30.0187066Z $ conda init <SHELL_NAME>
2020-03-04T10:50:30.0187175Z
2020-03-04T10:50:30.0187334Z Currently supported shells are:
2020-03-04T10:50:30.0187683Z - bash
2020-03-04T10:50:30.0188017Z - fish
2020-03-04T10:50:30.0188343Z - tcsh
2020-03-04T10:50:30.0188668Z - xonsh
2020-03-04T10:50:30.0188995Z - zsh
2020-03-04T10:50:30.0189331Z - powershell
2020-03-04T10:50:30.0189418Z
2020-03-04T10:50:30.0189810Z See 'conda init --help' for more information and options.
2020-03-04T10:50:30.0190158Z
2020-03-04T10:50:30.0190604Z IMPORTANT: You may need to close and restart your shell after running 'conda init'.
I also faced the same issue while trying to activate the conda env. A small change to the command mentioned by @FlyingTeller is to use something like:
conda create -n <YOUR_ENV_NAME> python=3.6
conda info
$CONDA/bin/activate <YOUR_ENV_NAME> # to activate the env
It is an old question without an accepted answer, but anyone who, like me, faced this issue recently can try the above solution.
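As an alternative sketch (not an official recipe), sourcing conda's shell hook makes conda activate available in the same non-interactive shell without restarting it; the /opt/conda prefix below matches the paths printed by conda init bash in the log above:
# Load conda's shell functions into the current shell, then activate the env.
source /opt/conda/etc/profile.d/conda.sh
conda activate myenv
# Subsequent installs now target myenv.
conda install --yes numpy cython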

Kubernetes sends traffic to the pod even after sending SIGTERM

I have a Spring Boot project with graceful shutdown configured, deployed on k8s 1.12.7. Here are the logs:
2019-07-20 10:23:16.180 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Received shutdown event
2019-07-20 10:23:16.180 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Waiting for 30s to finish
2019-07-20 10:23:16.273 INFO [service,fd964ebaa631a860,75a07c123397e4ff,false] 1 --- [io-8080-exec-10] com.jay.resource.ProductResource : GET /products?id=59
2019-07-20 10:23:16.374 INFO [service,9a569ecd8c448e98,00bc11ef2776d7fb,false] 1 --- [nio-8080-exec-1] com.jay.resource.ProductResource : GET /products?id=68
...
2019-07-20 10:23:33.711 INFO [service,1532d6298acce718,08cfb8085553b02e,false] 1 --- [nio-8080-exec-9] com.jay.resource.ProductResource : GET /products?id=209
2019-07-20 10:23:46.181 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Resumed after hibernation
2019-07-20 10:23:46.216 INFO [service,,,] 1 --- [ Thread-7] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
The application received the SIGTERM at 10:23:16.180 from Kubernetes. Termination of Pods, point #5, says that the terminating pod is removed from the endpoints list of the service, but that contradicts the fact that requests kept being forwarded to it for 17 seconds (until 10:23:33.711) after the SIGTERM was sent. Is there any configuration missing?
Dockerfile
FROM openjdk:8-jre-slim
MAINTAINER Jay
RUN apt update && apt install -y curl libtcnative-1 gcc && apt clean
ADD build/libs/sample-service.jar /
CMD ["java", "-jar" , "sample-service.jar"]
GracefulShutdown
// https://github.com/spring-projects/spring-boot/issues/4657
class GracefulShutdown(val waitTime: Long, val timeout: Long) : TomcatConnectorCustomizer, ApplicationListener<ContextClosedEvent> {

    @Volatile
    private var connector: Connector? = null

    override fun customize(connector: Connector) {
        this.connector = connector
    }

    override fun onApplicationEvent(event: ContextClosedEvent) {
        log.info("Received shutdown event")
        val executor = this.connector?.protocolHandler?.executor
        if (executor is ThreadPoolExecutor) {
            try {
                val threadPoolExecutor: ThreadPoolExecutor = executor
                log.info("Waiting for ${waitTime}s to finish")
                hibernate(waitTime * 1000)
                log.info("Resumed after hibernation")
                this.connector?.pause()
                threadPoolExecutor.shutdown()
                if (!threadPoolExecutor.awaitTermination(timeout, TimeUnit.SECONDS)) {
                    log.warn("Tomcat thread pool did not shut down gracefully within $timeout seconds. Proceeding with forceful shutdown")
                    threadPoolExecutor.shutdownNow()
                    if (!threadPoolExecutor.awaitTermination(timeout, TimeUnit.SECONDS)) {
                        log.error("Tomcat thread pool did not terminate")
                    }
                }
            } catch (ex: InterruptedException) {
                log.info("Interrupted")
                Thread.currentThread().interrupt()
            }
        } else {
            this.connector?.pause()
        }
    }

    private fun hibernate(time: Long) {
        try {
            Thread.sleep(time)
        } catch (ex: Exception) {}
    }

    companion object {
        private val log = LoggerFactory.getLogger(GracefulShutdown::class.java)
    }
}

@Configuration
class GracefulShutdownConfig(@Value("\${app.shutdown.graceful.wait-time:30}") val waitTime: Long,
                             @Value("\${app.shutdown.graceful.timeout:30}") val timeout: Long) {

    companion object {
        private val log = LoggerFactory.getLogger(GracefulShutdownConfig::class.java)
    }

    @Bean
    fun gracefulShutdown(): GracefulShutdown {
        return GracefulShutdown(waitTime, timeout)
    }

    @Bean
    fun webServerFactory(gracefulShutdown: GracefulShutdown): ConfigurableServletWebServerFactory {
        log.info("GracefulShutdown configured with wait: ${waitTime}s and timeout: ${timeout}s")
        val factory = TomcatServletWebServerFactory()
        factory.addConnectorCustomizers(gracefulShutdown)
        return factory
    }
}
deployment file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: service
  name: service
spec:
  progressDeadlineSeconds: 420
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      k8s-app: service
  strategy:
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: service
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - env:
            - name: SPRING_PROFILES_ACTIVE
              value: dev
          image: service:2
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 20
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 5
          name: service
          ports:
            - containerPort: 8080
              protocol: TCP
          readinessProbe:
            failureThreshold: 60
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 100
            periodSeconds: 10
            timeoutSeconds: 5
UPDATE:
Added custom health check endpoint
#RestControllerEndpoint(id = "live")
#Component
class LiveEndpoint {
companion object {
private val log = LoggerFactory.getLogger(LiveEndpoint::class.java)
}
#Autowired
private lateinit var gracefulShutdownStatus: GracefulShutdownStatus
#GetMapping
fun live(): ResponseEntity<Any> {
val status = if(gracefulShutdownStatus.isTerminating())
HttpStatus.INTERNAL_SERVER_ERROR.value()
else
HttpStatus.OK.value()
log.info("Status: $status")
return ResponseEntity.status(status).build()
}
}
Changed the livenessProbe,
livenessProbe:
  httpGet:
    path: /actuator/live
    port: 8080
  initialDelaySeconds: 100
  periodSeconds: 5
  timeoutSeconds: 5
  failureThreshold: 3
Here are the logs after the change,
2019-07-21 14:13:01.431 INFO [service,9b65b26907f2cf8f,9b65b26907f2cf8f,false] 1 --- [nio-8080-exec-2] com.jay.util.LiveEndpoint : Status: 200
2019-07-21 14:13:01.444 INFO [service,3da259976f9c286c,64b0d5973fddd577,false] 1 --- [nio-8080-exec-3] com.jay.resource.ProductResource : GET /products?id=52
2019-07-21 14:13:01.609 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Received shutdown event
2019-07-21 14:13:01.610 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Waiting for 30s to finish
...
2019-07-21 14:13:06.431 INFO [service,002c0da2133cf3b0,002c0da2133cf3b0,false] 1 --- [nio-8080-exec-3] com.jay.util.LiveEndpoint : Status: 500
2019-07-21 14:13:06.433 INFO [service,072abbd7275103ce,d1ead06b4abf2a34,false] 1 --- [nio-8080-exec-4] com.jay.resource.ProductResource : GET /products?id=96
...
2019-07-21 14:13:11.431 INFO [service,35aa09a8aea64ae6,35aa09a8aea64ae6,false] 1 --- [io-8080-exec-10] com.jay.util.LiveEndpoint : Status: 500
2019-07-21 14:13:11.508 INFO [service,a78c924f75538a50,0314f77f21076313,false] 1 --- [nio-8080-exec-2] com.jay.resource.ProductResource : GET /products?id=110
...
2019-07-21 14:13:16.431 INFO [service,38a940dfda03956b,38a940dfda03956b,false] 1 --- [nio-8080-exec-9] com.jay.util.LiveEndpoint : Status: 500
2019-07-21 14:13:16.593 INFO [service,d76e81012934805f,b61cb062154bb7f0,false] 1 --- [io-8080-exec-10] com.jay.resource.ProductResource : GET /products?id=152
...
2019-07-21 14:13:29.634 INFO [service,38a32a20358a7cc4,2029de1ed90e9539,false] 1 --- [nio-8080-exec-6] com.jay.resource.ProductResource : GET /products?id=191
2019-07-21 14:13:31.610 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Resumed after hibernation
2019-07-21 14:13:31.692 INFO [service,,,] 1 --- [ Thread-7] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
With the livenessProbe failureThreshold of 3, Kubernetes still served traffic for 13 seconds after the liveness probe failures, i.e., from 14:13:16.431 to 14:13:29.634.
UPDATE 2:
The sequence of events (thanks to Eamonn McEvoy)
seconds | healthy | events
0 | ✔ | * liveness probe healthy
1 | ✔ | - SIGTERM
2 | ✔ |
3 | ✔ |
4 | ✔ |
5 | ✔ | * liveness probe unhealthy (1/3)
6 | ✔ |
7 | ✔ |
8 | ✔ |
9 | ✔ |
10 | ✔ | * liveness probe unhealthy (2/3)
11 | ✔ |
12 | ✔ |
13 | ✔ |
14 | ✔ |
15 | ✘ | * liveness probe unhealthy (3/3)
.. | ✔ | * traffic is served
28 | ✔ | * traffic is served
29 | ✘ | * pod restarts
SIGTERM isn't putting the pod into a terminating state immediately. You can see in the logs that your application begins graceful shutdown at 10:23:16.180 and takes >20 seconds to complete. Only at that point does the container stop and the pod enter the terminating state.
As far as Kubernetes is concerned, the pod looks OK during the graceful shutdown period. You need to add a liveness probe to your deployment; when it becomes unhealthy, the traffic will stop.
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 100
  periodSeconds: 10
  timeoutSeconds: 5
Update:
This is because you have a failure threshold of 3, so you are allowing traffic for up to 15 seconds after the SIGTERM;
e.g.
seconds | healthy | events
0 | ✔ | * liveness probe healthy
1 | ✔ | - SIGTERM
2 | ✔ |
3 | ✔ |
4 | ✔ |
5 | ✔ | * liveness probe issued
6 | ✔ | .
7 | ✔ | .
8 | ✔ | .
9 | ✔ | .
10 | ✔ | * liveness probe timeout - unhealthy (1/3)
11 | ✔ |
12 | ✔ |
13 | ✔ |
14 | ✔ |
15 | ✔ | * liveness probe issued
16 | ✔ | .
17 | ✔ | .
18 | ✔ | .
19 | ✔ | .
20 | ✔ | * liveness probe timeout - unhealthy (2/3)
21 | ✔ |
22 | ✔ |
23 | ✔ |
24 | ✔ |
25 | ✔ | * liveness probe issued
26 | ✔ | .
27 | ✔ | .
28 | ✔ | .
29 | ✔ | .
30 | ✘ | * liveness probe timeout - unhealthy (3/3)
| | * pod restarts
This assumes that the endpoint returns an unhealthy response during the graceful shutdown. Since you have timeoutSeconds: 5, if the probe simply times out this will take much longer, with a 5-second delay between issuing a liveness probe request and receiving its response. It could also be the case that the container actually dies before the liveness threshold is hit and you are still seeing the original behaviour.
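One way to verify this timing on the cluster is to watch the Service endpoints and the pod state while the pod is terminating. This is only a sketch: it uses the k8s-app=service label from the deployment above and assumes the Service object is also named service, which isn't shown in the question:
# Watch when the terminating pod's IP is removed from the Service endpoints;
# that is the moment traffic stops being routed to it.
kubectl get endpoints service -w
# In another terminal, watch the pod lifecycle during the rollout/termination.
kubectl get pods -l k8s-app=service -w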

Artifactory issue (maybe Derby related)

A few days back I noticed that my Artifactory instance was not running anymore. Now I am not able to start it again. In the localhost logs in /opt/jfrog/artifactory/tomcat/logs I found a long stack trace, but I am not sure whether that is the actual problem, because it seems to appear already at a time when everything was still working fine.
Update 2 weeks later: A few days after initially writing this question, artifactory was suddenly running again. I did not understand why, since nothing I tried seemed to help. Now, the same issue is back...
The localhost log:
19-Jul-2018 10:50:34.781 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Using artifactory.home at '/var/opt/jfrog/artifactory' resolved from: System property
19-Jul-2018 10:50:38.843 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log 1 Spring WebApplicationInitializers detected on classpath
19-Jul-2018 10:51:57.405 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Using artifactory.home at '/var/opt/jfrog/artifactory' resolved from: System property
19-Jul-2018 10:52:03.598 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log 1 Spring WebApplicationInitializers detected on classpath
19-Jul-2018 10:53:07.428 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Using artifactory.home at '/var/opt/jfrog/artifactory' resolved from: System property
19-Jul-2018 10:53:15.436 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log 1 Spring WebApplicationInitializers detected on classpath
19-Jul-2018 10:53:32.409 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.listenerStart Exception sending context initialized event to listener instance of class [org.artifactory.webapp.servlet.ArtifactoryHomeConfigListener]
java.lang.RuntimeException: Could't establish connection with db: jdbc:derby:/var/opt/jfrog/artifactory/data/derby;create=true
at org.jfrog.config.db.TemporaryDBChannel.<init>(TemporaryDBChannel.java:35)
at org.artifactory.common.ArtifactoryConfigurationAdapter.getDbChannel(ArtifactoryConfigurationAdapter.java:183)
at org.jfrog.config.wrappers.ConfigurationManagerImpl.getDBChannel(ConfigurationManagerImpl.java:422)
at org.artifactory.config.MasterKeyBootstrapUtil.dbChannel(MasterKeyBootstrapUtil.java:206)
at org.artifactory.config.MasterKeyBootstrapUtil.tryToCreateTable(MasterKeyBootstrapUtil.java:95)
at org.artifactory.config.MasterKeyBootstrapUtil.validateOrInsertKeyInformation(MasterKeyBootstrapUtil.java:65)
at org.artifactory.config.MasterKeyBootstrapUtil.handleMasterKey(MasterKeyBootstrapUtil.java:46)
at org.artifactory.webapp.servlet.BasicConfigManagers.initHomes(BasicConfigManagers.java:95)
at org.artifactory.webapp.servlet.BasicConfigManagers.initialize(BasicConfigManagers.java:81)
at org.artifactory.webapp.servlet.ArtifactoryHomeConfigListener.contextInitialized(ArtifactoryHomeConfigListener.java:53)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4745)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5207)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:752)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:728)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:630)
at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1842)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Failed to start database '/var/opt/jfrog/artifactory/data/derby' with class loader java.net.URLClassLoader#e9e54c2, see the next exception for details.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
at org.apache.derby.jdbc.InternalDriver.getNewEmbedConnection(Unknown Source)
at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
at org.apache.derby.jdbc.EmbeddedDriver.connect(Unknown Source)
at org.jfrog.config.db.TemporaryDBChannel.<init>(TemporaryDBChannel.java:31)
... 22 more
Caused by: ERROR XJ040: Failed to start database '/var/opt/jfrog/artifactory/data/derby' with class loader java.net.URLClassLoader#e9e54c2, see the next exception for details.
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory.wrapArgsForTransportAcrossDRDA(Unknown Source)
... 32 more
Caused by: ERROR XSDB6: Another instance of Derby may have already booted the database /var/opt/jfrog/artifactory/data/derby.
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.privGetJBMSLockOnDB(Unknown Source)
at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.getJBMSLockOnDB(Unknown Source)
at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.TopService.bootModule(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.startModule(Unknown Source)
at org.apache.derby.impl.services.monitor.FileMonitor.startModule(Unknown Source)
at org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Unknown Source)
at org.apache.derby.impl.store.raw.RawStore.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.TopService.bootModule(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.startModule(Unknown Source)
at org.apache.derby.impl.services.monitor.FileMonitor.startModule(Unknown Source)
at org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Unknown Source)
at org.apache.derby.impl.store.access.RAMAccessManager.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.TopService.bootModule(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.startModule(Unknown Source)
at org.apache.derby.impl.services.monitor.FileMonitor.startModule(Unknown Source)
at org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Unknown Source)
at org.apache.derby.impl.db.BasicDatabase.bootStore(Unknown Source)
at org.apache.derby.impl.db.BasicDatabase.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.TopService.bootModule(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.startProviderService(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.findProviderAndStartService(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.startPersistentService(Unknown Source)
at org.apache.derby.iapi.services.monitor.Monitor.startPersistentService(Unknown Source)
... 29 more
19-Jul-2018 10:53:32.503 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.listenerStart Exception sending context initialized event to listener instance of class [org.artifactory.webapp.servlet.logback.LogbackConfigListener]
java.lang.IllegalStateException: Artifactory home not initialized
at org.artifactory.webapp.servlet.logback.LogbackConfigListener.initArtifactoryHome(LogbackConfigListener.java:55)
at org.artifactory.webapp.servlet.logback.LogbackConfigListener.contextInitialized(LogbackConfigListener.java:47)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4745)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5207)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:752)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:728)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:630)
at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1842)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
19-Jul-2018 10:53:32.569 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.listenerStart Exception sending context initialized event to listener instance of class [org.artifactory.webapp.servlet.ArtifactoryContextConfigListener]
java.lang.IllegalStateException: Artifactory home not initialized.
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.getArtifactoryHome(ArtifactoryContextConfigListener.java:176)
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.setSessionTrackingMode(ArtifactoryContextConfigListener.java:150)
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.contextInitialized(ArtifactoryContextConfigListener.java:77)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4745)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5207)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:752)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:728)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:630)
at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1842)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
19-Jul-2018 10:54:44.743 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log Initializing Spring embedded WebApplicationContext
19-Jul-2018 10:56:21.997 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log jolokia: No access restrictor found, access to any MBean is allowed
19-Jul-2018 17:22:06.575 INFO [localhost-startStop-5] org.apache.catalina.core.ApplicationContext.log Closing Spring root WebApplicationContext
19-Jul-2018 17:24:22.721 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Using artifactory.home at '/var/opt/jfrog/artifactory' resolved from: System property
19-Jul-2018 17:24:27.463 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log 1 Spring WebApplicationInitializers detected on classpath
19-Jul-2018 17:26:29.617 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log Initializing Spring embedded WebApplicationContext
artifactory.log with several restart attempts. Note: it always gets stuck at '[art-init] [INFO ] (o.a.w.s.ArtifactoryContextConfigListener:281) -':
[Artifactory ASCII-art startup banner]
Version: 5.9.0
Revision: 50900900
Artifactory Home: '/var/opt/jfrog/artifactory'
2018-08-01 16:52:01,039 [art-init] [WARN ] (o.a.f.l.ArtifactoryLockFile:65) - Found existing lock file. Artifactory was not shutdown properly. [/var/opt/jfrog/artifactory/data/.lock]
2018-08-01 16:52:03,744 [art-init] [INFO ] (o.a.s.ArtifactoryApplicationContext:484) - Artifactory application context set to NOT READY by refresh
2018-08-01 16:52:03,777 [art-init] [INFO ] (o.a.s.ArtifactoryApplicationContext:227) - Refreshing artifactory: startup date [Wed Aug 01 16:52:03 UTC 2018]; root of context hierarchy
2018-08-01 16:53:29,163 [art-init] [INFO ] (o.a.s.d.DbServiceImpl:217) - Database: Apache Derby 10.11.1.1 - (1616546). Driver: Apache Derby Embedded JDBC Driver 10.11.1.1 - (1616546) Pool: derby
2018-08-01 16:53:29,178 [art-init] [INFO ] (o.a.s.d.DbServiceImpl:220) - Connection URL: jdbc:derby:/var/opt/jfrog/artifactory/data/derby
2018-08-01 16:54:05,176 [art-init] [INFO ] (o.j.s.b.p.t.BinaryProviderClassScanner:76) - Added 'blob' from jar:file:/opt/jfrog/artifactory/tomcat/webapps/artifactory/WEB-INF/lib/artifactory-storage-db-5.9.0.jar!/
2018-08-01 16:54:05,524 [art-init] [INFO ] (o.j.s.b.p.t.BinaryProviderClassScanner:76) - Added 'empty, external-file, external-wrapper, file-system, cache-fs, retry' from jar:file:/opt/jfrog/artifactory/tomcat/webapps/artifactory/WEB-INF/lib/binary-store-core-2.0.37.jar!/
2018-08-01 16:54:49,865 [art-init] [INFO ] (o.a.s.ArtifactorySchedulerFactoryBean:647) - Starting Quartz Scheduler now
2018-08-01 16:54:52,478 [art-init] [INFO ] (o.a.s.ArtifactoryApplicationContext:234) - Artifactory context starting up 39 Spring Beans...
2018-08-01 16:55:04,519 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:356) - Initialzed new service id: jfrt#01c72726yd871j0wqyck0k0n0z
2018-08-01 16:55:05,338 [art-init] [INFO ] (o.j.s.c.EncryptionWrapperFactory:33) - createArtifactoryKeyWrapper EncryptionWrapperBase{ encodingType=ARTIFACTORY_MASTER, topEncrypter=BytesEncrypterBase{ Cipher='DESede', keyId='22QC5'}, formatUsed=OldFormat, decrypters=[BytesEncrypterBase{ Cipher='DESede', keyId='22QC5'}]}
2018-08-01 16:55:05,687 [art-init] [INFO ] (o.a.s.a.ArtifactoryAccessClientConfigStore:556) - Using Access Server URL: http://localhost:8040/access (bundled) source: detected
2018-08-01 16:55:09,887 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:279) - Waiting for access server...
2018-08-01 16:56:49,406 [art-init] [INFO ] (o.a.w.s.ArtifactoryContextConfigListener:281) -
[Artifactory ASCII-art startup banner]
Version: 5.9.0
Revision: 50900900
Artifactory Home: '/var/opt/jfrog/artifactory'
2018-08-01 16:56:49,510 [art-init] [WARN ] (o.a.f.l.ArtifactoryLockFile:65) - Found existing lock file. Artifactory was not shutdown properly. [/var/opt/jfrog/artifactory/data/.lock]
2018-08-01 16:56:52,853 [art-init] [INFO ] (o.a.s.ArtifactoryApplicationContext:484) - Artifactory application context set to NOT READY by refresh
2018-08-01 16:56:52,908 [art-init] [INFO ] (o.a.s.ArtifactoryApplicationContext:227) - Refreshing artifactory: startup date [Wed Aug 01 16:56:52 UTC 2018]; root of context hierarchy
2018-08-01 17:00:01,718 [art-init] [INFO ] (o.a.w.s.ArtifactoryContextConfigListener:281) -
[Artifactory ASCII-art startup banner]
Version: 5.9.0
Revision: 50900900
Artifactory Home: '/var/opt/jfrog/artifactory'
2018-08-01 17:00:01,827 [art-init] [WARN ] (o.a.f.l.ArtifactoryLockFile:65) - Found existing lock file. Artifactory was not shutdown properly. [/var/opt/jfrog/artifactory/data/.lock]
2018-08-01 17:00:04,617 [art-init] [INFO ] (o.a.s.ArtifactoryApplicationContext:484) - Artifactory application context set to NOT READY by refresh
2018-08-01 17:00:04,619 [art-init] [INFO ] (o.a.s.ArtifactoryApplicationContext:227) - Refreshing artifactory: startup date [Wed Aug 01 17:00:04 UTC 2018]; root of context hierarchy
2018-08-01 17:03:09,821 [art-init] [INFO ] (o.a.w.s.ArtifactoryContextConfigListener:281) -
When I try to start the service via systemctl start artifactory.service, it fails with "Job for artifactory.service failed because a timeout was exceeded. See 'systemctl status artifactory.service' and 'journalctl -xe' for details."
Output of systemctl status artifactory.service:
nikl@nikls-droplet-1:/opt/jfrog/artifactory/tomcat/logs$ sudo systemctl status artifactory.service
● artifactory.service - Setup Systemd script for Artifactory in Tomcat Servlet Engine
Loaded: loaded (/lib/systemd/system/artifactory.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Fri 2018-07-20 23:19:14 UTC; 25min ago
Process: 18829 ExecStart=/opt/jfrog/artifactory/bin/artifactoryManage.sh start (code=killed, signal=TERM)
Jul 20 23:18:23 nikls-droplet-1 artifactoryManage.sh[18829]: /usr/bin/java
Jul 20 23:18:24 nikls-droplet-1 artifactoryManage.sh[18829]: Starting Artifactory tomcat as user artifactory...
Jul 20 23:18:24 nikls-droplet-1 su[18851]: Successful su for artifactory by root
Jul 20 23:18:24 nikls-droplet-1 su[18851]: + ??? root:artifactory
Jul 20 23:18:24 nikls-droplet-1 su[18851]: pam_unix(su:session): session opened for user artifactory by (uid=0)
Jul 20 23:18:24 nikls-droplet-1 artifactoryManage.sh[18829]: Max number of open files: 1024
Jul 20 23:18:24 nikls-droplet-1 artifactoryManage.sh[18829]: Using ARTIFACTORY_HOME: /var/opt/jfrog/artifactory
Jul 20 23:18:24 nikls-droplet-1 artifactoryManage.sh[18829]: Using ARTIFACTORY_PID: /var/opt/jfrog/run/artifactory.pid
Jul 20 23:18:24 nikls-droplet-1 artifactoryManage.sh[18829]: Tomcat started.
Jul 20 23:19:14 nikls-droplet-1 systemd[1]: Stopped Setup Systemd script for Artifactory in Tomcat Servlet Engine.
Output of journalctl -xe:
nikl@nikls-droplet-1:/opt/jfrog/artifactory/tomcat/logs$ sudo journalctl -xe
-- Kernel start-up required KERNEL_USEC microseconds.
--
-- Initial RAM disk start-up required INITRD_USEC microseconds.
--
-- Userspace start-up required 108849 microseconds.
Jul 20 23:54:42 nikls-droplet-1 systemd[1]: Started User Manager for UID 1001.
-- Subject: Unit user@1001.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit user@1001.service has finished starting up.
--
-- The start-up result is done.
Jul 20 23:54:42 nikls-droplet-1 artifactoryManage.sh[19282]: Max number of open files: 1024
Jul 20 23:54:42 nikls-droplet-1 artifactoryManage.sh[19282]: Using ARTIFACTORY_HOME: /var/opt/jfrog/artifactory
Jul 20 23:54:42 nikls-droplet-1 artifactoryManage.sh[19282]: Using ARTIFACTORY_PID: /var/opt/jfrog/run/artifactory.pid
Jul 20 23:54:42 nikls-droplet-1 artifactoryManage.sh[19282]: Tomcat started.
Jul 20 23:54:42 nikls-droplet-1 su[19305]: pam_unix(su:session): session closed for user artifactory
Jul 20 23:56:11 nikls-droplet-1 systemd[1]: artifactory.service: Start operation timed out. Terminating.
Jul 20 23:56:11 nikls-droplet-1 systemd[1]: Failed to start Setup Systemd script for Artifactory in Tomcat Servlet Engine.
-- Subject: Unit artifactory.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit artifactory.service has failed.
--
-- The result is failed.
Jul 20 23:56:11 nikls-droplet-1 systemd[1]: artifactory.service: Unit entered failed state.
Jul 20 23:56:11 nikls-droplet-1 systemd[1]: artifactory.service: Failed with result 'timeout'.
Jul 20 23:56:11 nikls-droplet-1 polkitd(authority=local)[1443]: Unregistered Authentication Agent for unix-process:19273:10938586 (system bus name :1.317, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jul 20 23:56:16 nikls-droplet-1 systemd[1]: artifactory.service: Service hold-off time over, scheduling restart.
Jul 20 23:56:16 nikls-droplet-1 systemd[1]: Stopped Setup Systemd script for Artifactory in Tomcat Servlet Engine.
-- Subject: Unit artifactory.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit artifactory.service has finished shutting down.
Jul 20 23:56:16 nikls-droplet-1 systemd[1]: Starting Setup Systemd script for Artifactory in Tomcat Servlet Engine...
-- Subject: Unit artifactory.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit artifactory.service has begun starting up.
Jul 20 23:56:16 nikls-droplet-1 artifactoryManage.sh[19863]: /usr/bin/java
Jul 20 23:56:17 nikls-droplet-1 artifactoryManage.sh[19863]: Starting Artifactory tomcat as user artifactory...
Jul 20 23:56:17 nikls-droplet-1 su[19885]: Successful su for artifactory by root
Jul 20 23:56:17 nikls-droplet-1 su[19885]: + ??? root:artifactory
Jul 20 23:56:17 nikls-droplet-1 su[19885]: pam_unix(su:session): session opened for user artifactory by (uid=0)
Jul 20 23:56:17 nikls-droplet-1 systemd-logind[1395]: New session c140 of user artifactory.
-- Subject: A new session c140 has been created for user artifactory
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID c140 has been created for the user artifactory.
--
-- The leading process of the session is 19885.
Jul 20 23:56:17 nikls-droplet-1 systemd[1]: Started Session c140 of user artifactory.
-- Subject: Unit session-c140.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-c140.scope has finished starting up.
--
-- The start-up result is done.
Jul 20 23:56:17 nikls-droplet-1 artifactoryManage.sh[19863]: Max number of open files: 1024
Jul 20 23:56:17 nikls-droplet-1 artifactoryManage.sh[19863]: Using ARTIFACTORY_HOME: /var/opt/jfrog/artifactory
Jul 20 23:56:17 nikls-droplet-1 artifactoryManage.sh[19863]: Using ARTIFACTORY_PID: /var/opt/jfrog/run/artifactory.pid
Jul 20 23:56:17 nikls-droplet-1 artifactoryManage.sh[19863]: Tomcat started.
Jul 20 23:56:17 nikls-droplet-1 su[19885]: pam_unix(su:session): session closed for user artifactory
Jul 20 23:56:18 nikls-droplet-1 systemd-logind[1395]: Removed session c140.
-- Subject: Session c140 has been terminated
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A session with the ID c140 has been terminated.
I have tried deleting the .lck files in the derby folder, because that seems to have solved the "Another instance of Derby may have already booted the database /var/opt/jfrog/artifactory/data/derby" issue for someone else before, but nothing changed for me. After the next start of Artifactory the files were simply back and the same error showed up in the log files.
Since the output of systemctl start artifactory.service complains about a timeout, I raised START_TMO in artifactory.default to 300, but the same problem persists. I also raised START_TMO in a different file according to the first answer of this SO question.
I don't understand what is going on and would be very grateful for any help/advice.
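For reference, the lock cleanup I attempted looked roughly like the sketch below; the TimeoutStartSec drop-in at the end is just one more way to give the service extra start time on the systemd side (db.lck/dbex.lck are Derby's usual lock file names, and the paths are the ones from the logs above):
# Stop Artifactory and make sure no stray Tomcat/Java process is still holding the Derby database
sudo systemctl stop artifactory.service
ps -ef | grep -i artifactory
# Remove the stale Derby and Artifactory lock files
sudo rm -f /var/opt/jfrog/artifactory/data/derby/db.lck /var/opt/jfrog/artifactory/data/derby/dbex.lck
sudo rm -f /var/opt/jfrog/artifactory/data/.lock
# Optionally give the unit more time before systemd kills it, e.g. a drop-in with TimeoutStartSec=600
sudo systemctl edit artifactory.service
sudo systemctl start artifactory.service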

OpenDaylight: No OpenFlow connection from Open vSwitch to Controller

There is no connection between the controller and Open vSwitch.
opendaylight-user@root>info
Karaf
Karaf version 4.0.10
Karaf home /opt/odl
Karaf base /opt/odl
OSGi Framework org.eclipse.osgi-3.10.101.v20150820-1432
JVM
Java Virtual Machine OpenJDK 64-Bit Server VM version 25.144-b01
Version 1.8.0_144
Vendor Oracle Corporation
Pid 4312
Uptime 19 hours 16 minutes
Total compile time 3 minutes
Threads
Live threads 218
Daemon threads 99
Peak 221
Total started 8170
Memory
Current heap size 621,509 kbytes
Maximum heap size 1,864,192 kbytes
Committed heap size 1,030,144 kbytes
Pending objects 0
Garbage collector Name = 'PS Scavenge', Collections = 533, Time = 12.404 seconds
Garbage collector Name = 'PS MarkSweep', Collections = 24, Time = 15.571 seconds
Classes
Current classes loaded 23,639
Total classes loaded 23,827
Total classes unloaded 188
Operating system
Name Linux version 3.10.0-514.21.1.el7.x86_64
Architecture amd64
Processors 4
opendaylight-user@root>
The OpenFlow session stats show the following (10.10.10.10 is the IP address of the switch):
opendaylight-user@root>ofp:show-session-stats
SESSION : /10.10.10.10:35616
CONNECTION_DISCONNECTED_BY_DEVICE : 1
SESSION : /10.10.10.10:35592
CONNECTION_DISCONNECTED_BY_DEVICE : 1
SESSION : /10.10.10.10:35608
CONNECTION_DISCONNECTED_BY_DEVICE : 1
SESSION : /10.10.10.10:51110
CONNECTION_DISCONNECTED_BY_DEVICE : 1
SESSION : /10.10.10.10:35610
CONNECTION_DISCONNECTED_BY_DEVICE : 1
SESSION : /10.10.10.10:35612
CONNECTION_DISCONNECTED_BY_DEVICE : 1
SESSION : /10.10.10.10:35614
CONNECTION_DISCONNECTED_BY_DEVICE : 1
opendaylight-user@root>
The logs show the following lines:
2018-02-26 11:54:08,732 | DEBUG | pool-58-thread-1 | LLDPSpeaker | 296 - org.opendaylight.openflowplugin.applications.lldp-speaker - 0.5.1 | Sending LLDP frames to 0 ports...
2018-02-26 11:54:10,092 | DEBUG | ntLoopGroup-15-7 | nflowProtocolListenerInitialImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | echo request received: 0
2018-02-26 11:54:13,732 | DEBUG | pool-58-thread-1 | LLDPSpeaker | 296 - org.opendaylight.openflowplugin.applications.lldp-speaker - 0.5.1 | Sending LLDP frames to 0 ports...
2018-02-26 11:54:15,092 | DEBUG | ntLoopGroup-15-7 | nflowProtocolListenerInitialImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | echo request received: 0
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | FROM_SWITCH: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | FROM_SWITCH_TRANSLATE_IN_SUCCESS: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | FROM_SWITCH_TRANSLATE_OUT_SUCCESS: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | FROM_SWITCH_TRANSLATE_SRC_FAILURE: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | FROM_SWITCH_PACKET_IN_LIMIT_REACHED_AND_DROPPED: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | FROM_SWITCH_NOTIFICATION_REJECTED: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | FROM_SWITCH_PUBLISHED_SUCCESS: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | FROM_SWITCH_PUBLISHED_FAILURE: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | TO_SWITCH_ENTERED: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | TO_SWITCH_DISREGARDED: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | TO_SWITCH_RESERVATION_REJECTED: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | TO_SWITCH_READY_FOR_SUBMIT: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | TO_SWITCH_SUBMIT_SUCCESS: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | TO_SWITCH_SUBMIT_SUCCESS_NO_RESPONSE: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | TO_SWITCH_SUBMIT_FAILURE: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | TO_SWITCH_SUBMIT_ERROR: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | REQUEST_STACK_FREED: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | OFJ_BACKPRESSURE_ON: no activity detected
2018-02-26 11:54:16,073 | DEBUG | pool-75-thread-1 | MessageIntelligenceAgencyImpl | 305 - org.opendaylight.openflowplugin.impl - 0.5.1 | OFJ_BACKPRESSURE_OFF: no activity detected
No nodes in OpenFlow topology:
opendaylight-user@root>openflow:getallnodes
No node is connected yet
opendaylight-user@root>
On the wire, I see OpenFlow packets between the controller and the switch:
Frame 1: 74 bytes on wire (592 bits), 74 bytes captured (592 bits) on interface 0
Internet Protocol Version 4, Src: 10.10.10.10 (10.10.10.10), Dst: 10.10.10.20 (10.10.10.20)
Transmission Control Protocol, Src Port: 35338 (35338), Dst Port: openflow (6653), Seq: 1, Ack: 1, Len: 8
Openflow Protocol
Header
Version: 0x04
Type: Echo request (SM) - OFPT_ECHO_REQUEST (2)
Length: 8
Transaction ID: 0
Frame 2: 74 bytes on wire (592 bits), 74 bytes captured (592 bits) on interface 0
Internet Protocol Version 4, Src: 10.10.10.20 (10.10.10.20), Dst: 10.10.10.10 (10.10.10.10)
Transmission Control Protocol, Src Port: openflow (6653), Dst Port: 35338 (35338), Seq: 1, Ack: 9, Len: 8
Openflow Protocol
Header
Version: 0x04
Type: Echo reply (SM) - OFPT_ECHO_REPLY (3)
Length: 8
Transaction ID: 0
Frame 3: 66 bytes on wire (528 bits), 66 bytes captured (528 bits) on interface 0
Internet Protocol Version 4, Src: 10.10.10.10 (10.10.10.10), Dst: 10.10.10.20 (10.10.10.20)
Transmission Control Protocol, Src Port: 35338 (35338), Dst Port: openflow (6653), Seq: 9, Ack: 9, Len: 0
Flags: 0x010 (ACK)
000. .... .... = Reserved: Not set
...0 .... .... = Nonce: Not set
.... 0... .... = Congestion Window Reduced (CWR): Not set
.... .0.. .... = ECN-Echo: Not set
.... ..0. .... = Urgent: Not set
.... ...1 .... = Acknowledgment: Set
.... .... 0... = Push: Not set
.... .... .0.. = Reset: Not set
.... .... ..0. = Syn: Not set
.... .... ...0 = Fin: Not set
Options: (12 bytes), No-Operation (NOP), No-Operation (NOP), Timestamps
No-Operation (NOP)
Type: 1
0... .... = Copy on fragmentation: No
.00. .... = Class: Control (0)
...0 0001 = Number: No-Operation (NOP) (1)
Timestamps: TSval 140811003, TSecr 165720510
Kind: Timestamp (8)
Length: 10
Timestamp value: 140811003
Timestamp echo reply: 165720510
Now, on the Open vSwitch side:
# ovs-vsctl show
934472aa-72ca-4f61-834c-a86bd41a9b27
Manager "tcp:10.10.10.20:6640"
is_connected: true
Bridge "vsw0"
Controller "tcp:10.10.10.20:6653"
is_connected: true
Port "339bfa2f6b7b4_l"
Interface "339bfa2f6b7b4_l"
Port "0de326ec0ace4_l"
Interface "0de326ec0ace4_l"
Port "vsw0"
Interface "vsw0"
type: internal
ovs_version: "2.9.0"
The controller goes from ACTIVE to IDLE:
# ovs-vsctl list controller
_uuid : ac3f5c94-624a-47ce-90c3-782060e4ec3c
connection_mode : []
controller_burst_limit: []
controller_rate_limit: []
enable_async_messages: []
external_ids : {}
inactivity_probe : []
is_connected : true
local_gateway : []
local_ip : []
local_netmask : []
max_backoff : []
other_config : {}
role : other
status : {sec_since_connect="42353", state=ACTIVE}
target : "tcp:10.10.10.20:6653"
The logs show the following:
2018-02-26T16:59:55.091Z|90890|vconn|DBG|tcp:10.10.10.20:6653: sent (Success): OFPT_ECHO_REQUEST (OF1.3) (xid=0x0): 0 bytes of payload
2018-02-26T16:59:55.092Z|90891|vconn|DBG|tcp:10.10.10.20:6653: received: OFPT_ECHO_REPLY (OF1.3) (xid=0x0): 0 bytes of payload
2018-02-26T16:59:55.092Z|90892|rconn|DBG|vsw0<->tcp:10.10.10.20:6653: entering ACTIVE
2018-02-26T17:00:00.091Z|90893|rconn|DBG|vsw0<->tcp:10.10.10.20:6653: idle 5 seconds, sending inactivity probe
2018-02-26T17:00:00.091Z|90894|rconn|DBG|vsw0<->tcp:10.10.10.20:6653: entering IDLE
Any ideas why the OF connection is not being formed?
The issue was that the bridge was not configured to use the OpenFlow 1.3 protocol. Setting it explicitly fixed the problem:
ovs-vsctl set bridge vsw0 protocols=OpenFlow13
Right after that, the session came up:
opendaylight-user@root>ofp:show-session-stats
SESSION : openflow:231865136161864
CONNECTION_CREATED : 1
opendaylight-user@root>
The controller is now "master":
# ovs-vsctl list controller
_uuid : ac3f5c94-624a-47ce-90c3-782060e4ec3c
connection_mode : []
controller_burst_limit: []
controller_rate_limit: []
enable_async_messages: []
external_ids : {}
inactivity_probe : []
is_connected : true
local_gateway : []
local_ip : []
local_netmask : []
max_backoff : []
other_config : {}
role : master
status : {sec_since_connect="116", state=ACTIVE}
target : "tcp:10.10.10.20:6653"
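As a follow-up check (not part of the original answer, just standard ovs-vsctl/ovs-ofctl usage), the OpenFlow versions configured on the bridge can be confirmed like this:
# Show which OpenFlow versions the bridge advertises to controllers
ovs-vsctl get bridge vsw0 protocols
# Several versions can be allowed at once if older controllers also need to connect
ovs-vsctl set bridge vsw0 protocols=OpenFlow10,OpenFlow13
# Confirm the switch answers OpenFlow 1.3 requests locally
ovs-ofctl -O OpenFlow13 show vsw0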

Resources