Google App Engine (GAE) ERROR: chown: cannot access '/app/public': No such file or directory - Laravel

I'm getting this error from my first "gcloud app deploy". The project is a Laravel app. I've replaced production info with dummy info in all logs and files.
I've read some older posts with the same issue but no responses. Is anyone else having this issue? I've tested with a fresh Laravel 5.7 installation and I get the same error. I've deleted the whole project and created a new GAE project. I don't know what else to try. I've also tested with a Symfony 4 project and hit the same problem.
//file: app.yaml
service: api-dev
runtime: php
env: flex

runtime_config:
  document_root: public
  front_controller_file: index.php
  skip_lockdown_document_root: true  # <= this does the trick!!??
  #enable_stackdriver_integration: true

# Ensure we skip ".env", which is only for local development
skip_files:
  - .env
  - .git

env_variables:
  # Put production environment variables here.
  APP_LOG: errorlog
  APP_ENV: local
  APP_DEBUG: true
  APP_KEY: base64:SECRET
  STORAGE_DIR: /tmp
  ## Set these environment variables according to your CloudSQL configuration.
  DB_HOST: localhost
  DB_DATABASE: api_db
  DB_USERNAME: api_user
  DB_PASSWORD: api_pwd
  DB_SOCKET: "/cloudsql/YOUR_INSTANCE_NAME"

beta_settings:
  # for Cloud SQL, set this value to the Cloud SQL connection name,
  # e.g. "project:region:cloudsql-instance"
  cloud_sql_instances: "YOUR_INSTANCE_NAME"
My Laravel project's composer.json:
//composer.json
{
    "name": "laravel/laravel",
    "description": "The Laravel Framework.",
    "keywords": [
        "framework",
        "laravel"
    ],
    "license": "MIT",
    "type": "project",
    "require": {
        "php": ">=5.5.9",
        "laravel/framework": "5.2.*",
        "tymon/jwt-auth": "0.5.*",
        "phpoffice/phpexcel": "1.8.x-dev",
        "maatwebsite/excel": "~2.1.0",
        "barryvdh/laravel-dompdf": "0.6.*",
        "vsmoraes/laravel-pdf": "^1.0",
        "laravelcollective/html": "~5.0",
        "barryvdh/laravel-snappy": "^0.3.3",
        "inacho/php-credit-card-validator": "1.*",
        "laravel/envoy": "^1.4"
    },
    "require-dev": {
        "fzaninotto/faker": "~1.4",
        "mockery/mockery": "0.9.*",
        "phpunit/phpunit": "~4.0",
        "symfony/css-selector": "2.8.*|3.0.*",
        "symfony/dom-crawler": "2.8.*|3.0.*"
    },
    "autoload": {
        "classmap": [
            "database"
        ],
        "psr-4": {
            "App\\": "app/"
        }
    },
    "autoload-dev": {
        "classmap": [
            "tests/TestCase.php"
        ]
    },
    "scripts": {
        "post-root-package-install": [
            "php -r \"copy('.env.example', '.env');\""
        ],
        "post-create-project-cmd": [
            "php artisan key:generate"
        ],
        "post-install-cmd": [
            "chmod -R 755 bootstrap\/cache",
            "php artisan cache:clear"
        ],
        "pre-update-cmd": [
            "php artisan clear-compiled"
        ],
        "post-update-cmd": [
            "php artisan optimize"
        ]
    },
    "config": {
        "preferred-install": "dist",
        "sort-packages": true,
        "optimize-autoloader": true
    },
    "minimum-stability": "dev",
    "prefer-stable": true
}
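Side note: the build output further down warns that composer.json does not pin a PHP runtime version, so the builder falls back to the latest 7.2.x. If you want to follow the builder's own recommendation, pinning looks like this (pick the minor version your code actually runs on):

# pin the PHP runtime version for the GAE builder
composer require php 7.2.*

This adds a "php" constraint to the "require" block of composer.json.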
And here is the full output of the deploy command:
'gcloud app deploy deploy/dev.yaml --verbosity=debug'
DEBUG: No bucket specified, retrieving default bucket.
DEBUG: Using bucket [gs://staging.ibizi-dev.appspot.com].
DEBUG: Service [appengineflex.googleapis.com] is already enabled for project [ibizi-dev]
Beginning deployment of service [default]...
INFO: Need Dockerfile to be generated for runtime php
Building and pushing image for service [default]
INFO: Uploading [/var/folders/yh/t3v72_f96_9gshbgykw2yp6w0000gn/T/tmpk_BvPn/src.tgz] to [staging.ibizi-dev.appspot.com/eu.gcr.io/ibizi-dev/appengine/default.20181022t152439:latest]
DEBUG: Using runtime builder root [gs://runtime-builders/]
DEBUG: Loading runtimes manifest from [gs://runtime-builders/runtimes.yaml]
INFO: Reading [<googlecloudsdk.api_lib.storage.storage_util.ObjectReference object at 0x110774650>]
DEBUG: Resolved runtime [php] as build configuration [gs://runtime-builders/php-default-builder-20180926113101.yaml]
INFO: Using runtime builder [gs://runtime-builders/php-default-builder-20180926113101.yaml]
INFO: Reading [<googlecloudsdk.api_lib.storage.storage_util.ObjectReference object at 0x11078d150>]
Started cloud build [c874e7e6-d67b-443b-9cc0-169aa6f4dd58].
DEBUG: GCS logfile url is https://www.googleapis.com/storage/v1/b/staging.ibizi-dev.appspot.com/o/log-c874e7e6-d67b-443b-9cc0-169aa6f4dd58.txt?alt=media
To see logs in the Cloud Console: https://console.cloud.google.com/gcr/builds/c874e7e6-d67b-443b-9cc0-169aa6f4dd58?project=702099999392
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 404 (no log yet; keep polling)
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 206 (read 205 bytes)
---------------------------------------------------------------------------------------------------------------------------------- REMOTE BUILD OUTPUT -----------------------------------------------------------------------------------------------------------------------------------
starting build "c874e7e6-d67b-443b-9cc0-169aa6f4dd58"
FETCHSOURCE
Fetching storage object: gs://staging.ibizi-dev.appspot.com/eu.gcr.io/ibizi-dev/appengine/default.20181022t152439:latest#1540214682717948
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 416 (no new content; keep polling)
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 206 (read 529 bytes)
Copying gs://staging.ibizi-dev.appspot.com/eu.gcr.io/ibizi-dev/appengine/default.20181022t152439:latest#1540214682717948...
- [1 files][ 798.0 B/ 798.0 B]
Operation completed over 1 objects/798.0 B.
BUILD
Starting Step #0
Step #0: Pulling image: gcr.io/gcp-runtimes/php/gen-dockerfile@sha256:2528f753fe7726eb4068d7020d11ecc97216b3ab9e4cb7728edda98cd61b410c
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 206 (read 1446 bytes)
Step #0: sha256:2528f753fe7726eb4068d7020d11ecc97216b3ab9e4cb7728edda98cd61b410c: Pulling from gcp-runtimes/php/gen-dockerfile
Step #0: Digest: sha256:2528f753fe7726eb4068d7020d11ecc97216b3ab9e4cb7728edda98cd61b410c
Step #0: Status: Downloaded newer image for gcr.io/gcp-runtimes/php/gen-dockerfile@sha256:2528f753fe7726eb4068d7020d11ecc97216b3ab9e4cb7728edda98cd61b410c
Step #0: + php /builder/create_dockerfile.php create --php72-image gcr.io/google-appengine/php72@sha256:ab499cb6f2419351ee7db259ae88721f9861935659b42007727395b80226a809 --php71-image gcr.io/google-appengine/php71@sha256:d5dbccb1e6dcc6d26c2df23c464f191ea10ef2bf8e9e18e6d13df3c6770b92a1 --php70-image gcr.io/google-appengine/php70@sha256:cf215595a9d4540762724721ab19837abb9af43645963fa0bd29cc31a2960529 --php56-image gcr.io/google-appengine/php56@sha256:0e57acbab18ce2dba8142dff708157ffdacdefbbdfa480d9068382431fd60fb5
Step #0:
Step #0: There is no PHP runtime version specified in composer.json, or
Step #0: we don't support the version you specified. Google App Engine
Step #0: uses the latest 7.2.x version.
Step #0: We recommend pinning your PHP version by running:
Step #0:
Step #0: composer require php 7.2.* (replace it with your desired minor version)
Step #0:
Step #0: Using PHP version 7.2.x...
Step #0:
Finished Step #0
Starting Step #1
Step #1: Pulling image: gcr.io/cloud-builders/docker@sha256:ae2ac38e0aba542add006c47eb4a5820b819f9fa74ada0673d2910387e0f1c0e
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 206 (read 354 bytes)
Step #1: sha256:ae2ac38e0aba542add006c47eb4a5820b819f9fa74ada0673d2910387e0f1c0e: Pulling from cloud-builders/docker
Step #1: e5c573070776: Already exists
Step #1: a7e8e7eaedca: Already exists
Step #1: 3c2cba919283: Already exists
Step #1: 2e2f0b8e1d16: Pulling fs layer
Step #1: 2e2f0b8e1d16: Verifying Checksum
Step #1: 2e2f0b8e1d16: Download complete
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 416 (no new content; keep polling)
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 416 (no new content; keep polling)
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 416 (no new content; keep polling)
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 416 (no new content; keep polling)
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 416 (no new content; keep polling)
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 416 (no new content; keep polling)
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 416 (no new content; keep polling)
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 206 (read 1636 bytes)
Step #1: 2e2f0b8e1d16: Pull complete
Step #1: Digest: sha256:ae2ac38e0aba542add006c47eb4a5820b819f9fa74ada0673d2910387e0f1c0e
Step #1: Status: Downloaded newer image for gcr.io/cloud-builders/docker@sha256:ae2ac38e0aba542add006c47eb4a5820b819f9fa74ada0673d2910387e0f1c0e
Step #1: Sending build context to Docker daemon 5.632kB
Step #1: Step 1/9 : FROM gcr.io/google-appengine/php72@sha256:ab499cb6f2419351ee7db259ae88721f9861935659b42007727395b80226a809
Step #1: sha256:ab499cb6f2419351ee7db259ae88721f9861935659b42007727395b80226a809: Pulling from google-appengine/php72
Step #1: Digest: sha256:ab499cb6f2419351ee7db259ae88721f9861935659b42007727395b80226a809
Step #1: Status: Downloaded newer image for gcr.io/google-appengine/php72@sha256:ab499cb6f2419351ee7db259ae88721f9861935659b42007727395b80226a809
Step #1: ---> 66030c9a7cc4
Step #1: Step 2/9 : ENV DOCUMENT_ROOT='/app/public' ENABLE_STACKDRIVER_INTEGRATION='1' LOG_CHANNEL='stackdriver' APP_LOG='errorlog' APP_ENV='demo' APP_DEBUG='' APP_KEY='base64:+9Lz6fgTo+I5NtJgwNctYuFaaDBNdiL1OpNlhzdPiXs=' STORAGE_DIR='/tmp' DB_HOST='localhost' DB_DATABASE='pms_dev' DB_USERNAME='root' DB_PASSWORD='TMvsds666' DB_SOCKET='/cloudsql/ibizi-dev:europe-west1:pmsapi-dev' FRONT_CONTROLLER_FILE='index.php' COMPOSER_FLAGS='--no-dev --prefer-dist' DETECTED_PHP_VERSION='7.2' IS_BATCH_DAEMON_RUNNING='true'
Step #1: ---> Running in 7594ef12b822
Step #1: Removing intermediate container 7594ef12b822
Step #1: ---> 142b1c897658
Step #1: Step 3/9 : COPY . $APP_DIR
Step #1: ---> f981d2d61e5d
Step #1: Step 4/9 : RUN chown -R www-data.www-data $APP_DIR
Step #1: ---> Running in 2acb68030e93
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 206 (read 363 bytes)
Step #1: Removing intermediate container 2acb68030e93
Step #1: ---> 4725933d3fe3
Step #1: Step 5/9 : RUN /build-scripts/composer.sh
Step #1: ---> Running in d754ce7f2927
Step #1: Removing intermediate container d754ce7f2927
Step #1: ---> 3f19b1e63bed
Step #1: Step 6/9 : RUN /bin/bash /build-scripts/move-config-files.sh
Step #1: ---> Running in 10b9531680b7
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 206 (read 975 bytes)
Step #1: Moving user supplied config files...
Step #1: Removing intermediate container 10b9531680b7
Step #1: ---> 45771fd2253a
Step #1: Step 7/9 : RUN /usr/sbin/nginx -t -c /etc/nginx/nginx.conf
Step #1: ---> Running in 67614392383d
Step #1: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Step #1: nginx: configuration file /etc/nginx/nginx.conf test is successful
Step #1: Removing intermediate container 67614392383d
Step #1: ---> 6d3a14951445
Step #1: Step 8/9 : RUN /bin/bash /build-scripts/lockdown.sh
Step #1: ---> Running in 706cf97d0d6d
Step #1: chown: cannot access '/app/public': No such file or directory
Step #1: Locking down the document root...
Step #1: The command '/bin/sh -c /bin/bash /build-scripts/lockdown.sh' returned a non-zero code: 1
Finished Step #1
ERROR
ERROR: build step 1 "gcr.io/cloud-builders/docker@sha256:ae2ac38e0aba542add006c47eb4a5820b819f9fa74ada0673d2910387e0f1c0e" failed: exit status 1
DEBUG: Operation [operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4] complete. Result: {
"metadata": {
"#type": "type.googleapis.com/google.devtools.cloudbuild.v1.BuildOperationMetadata",
"build": {
"finishTime": "2018-10-22T13:25:11.466503939Z",
"status": "FAILURE",
"timeout": "600s",
"startTime": "2018-10-22T13:24:45.495313971Z",
"artifacts": {
"images": [
"eu.gcr.io/ibizi-dev/appengine/default.20181022t152439:latest"
]
},
"logsBucket": "staging.ibizi-dev.appspot.com",
"results": {
"buildStepImages": [
"",
""
]
},
"id": "c874e7e6-d67b-443b-9cc0-169aa6f4dd58",
"timing": {
"FETCHSOURCE": {
"endTime": "2018-10-22T13:24:51.044559960Z",
"startTime": "2018-10-22T13:24:47.603211198Z"
},
"BUILD": {
"endTime": "2018-10-22T13:25:10.480688954Z",
"startTime": "2018-10-22T13:24:51.120689972Z"
}
},
"source": {
"storageSource": {
"object": "eu.gcr.io/ibizi-dev/appengine/default.20181022t152439:latest",
"bucket": "staging.ibizi-dev.appspot.com"
}
},
"options": {
"substitutionOption": "ALLOW_LOOSE",
"logging": "LEGACY"
},
"steps": [
{
"status": "SUCCESS",
"name": "gcr.io/gcp-runtimes/php/gen-dockerfile#sha256:2528f753fe7726eb4068d7020d11ecc97216b3ab9e4cb7728edda98cd61b410c",
"args": [
"--php72-image",
"gcr.io/google-appengine/php72#sha256:ab499cb6f2419351ee7db259ae88721f9861935659b42007727395b80226a809",
"--php71-image",
"gcr.io/google-appengine/php71#sha256:d5dbccb1e6dcc6d26c2df23c464f191ea10ef2bf8e9e18e6d13df3c6770b92a1",
"--php70-image",
"gcr.io/google-appengine/php70#sha256:cf215595a9d4540762724721ab19837abb9af43645963fa0bd29cc31a2960529",
"--php56-image",
"gcr.io/google-appengine/php56#sha256:0e57acbab18ce2dba8142dff708157ffdacdefbbdfa480d9068382431fd60fb5"
],
"env": [
"GAE_APPLICATION_YAML_PATH=dev.yaml"
],
"timing": {
"endTime": "2018-10-22T13:24:53.076290422Z",
"startTime": "2018-10-22T13:24:51.120719923Z"
},
"pullTiming": {
"endTime": "2018-10-22T13:24:51.939222557Z",
"startTime": "2018-10-22T13:24:51.120719923Z"
}
},
{
"status": "FAILURE",
"name": "gcr.io/cloud-builders/docker#sha256:ae2ac38e0aba542add006c47eb4a5820b819f9fa74ada0673d2910387e0f1c0e",
"args": [
"build",
"-t",
"eu.gcr.io/ibizi-dev/appengine/default.20181022t152439:latest",
"--network=cloudbuild",
"."
],
"env": [
"GAE_APPLICATION_YAML_PATH=dev.yaml"
],
"timing": {
"endTime": "2018-10-22T13:25:10.374276995Z",
"startTime": "2018-10-22T13:24:53.076303098Z"
},
"pullTiming": {
"endTime": "2018-10-22T13:25:06.452731685Z",
"startTime": "2018-10-22T13:24:53.076303098Z"
}
}
],
"sourceProvenance": {
"resolvedStorageSource": {
"generation": "1540214682717948",
"object": "eu.gcr.io/ibizi-dev/appengine/default.20181022t152439:latest",
"bucket": "staging.ibizi-dev.appspot.com"
},
"fileHashes": {
"gs://staging.ibizi-dev.appspot.com/eu.gcr.io/ibizi-dev/appengine/default.20181022t152439:latest#1540214682717948": {}
}
},
"projectId": "ibizi-dev",
"images": [
"eu.gcr.io/ibizi-dev/appengine/default.20181022t152439:latest"
],
"substitutions": {
"_GAE_APPLICATION_YAML_PATH": "dev.yaml",
"_OUTPUT_IMAGE": "eu.gcr.io/ibizi-dev/appengine/default.20181022t152439:latest"
},
"createTime": "2018-10-22T13:24:44.661298334Z",
"logUrl": "https://console.cloud.google.com/gcr/builds/c874e7e6-d67b-443b-9cc0-169aa6f4dd58?project=702099999392"
}
},
"done": true,
"name": "operations/build/ibizi-dev/Yzg3NGU3ZTYtZDY3Yi00NDNiLTljYzAtMTY5YWE2ZjRkZDU4",
"error": {
"message": "Build failed; check build logs for details",
"code": 2
}
}
DEBUG: Reading GCS logfile: 416 (no new content; keep polling)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
DEBUG: (gcloud.app.deploy) Cloud build failed. Check logs at https://console.cloud.google.com/gcr/builds/c874e7e6-d67b-443b-9cc0-169aa6f4dd58?project=702099999392 Failure status: UNKNOWN: Error Response: [2] Build failed; check build logs for details
Traceback (most recent call last):
File "/Users/davidpv/Downloads/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 841, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/Users/davidpv/Downloads/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 770, in Run
resources = command_instance.Run(args)
File "/Users/davidpv/Downloads/google-cloud-sdk/lib/surface/app/deploy.py", line 90, in Run
parallel_build=False)
File "/Users/davidpv/Downloads/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 620, in RunDeploy
flex_image_build_option=flex_image_build_option)
File "/Users/davidpv/Downloads/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 405, in Deploy
image, code_bucket_ref, gcr_domain, flex_image_build_option)
File "/Users/davidpv/Downloads/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 282, in _PossiblyBuildAndPush
self.deploy_options.parallel_build)
File "/Users/davidpv/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/deploy_command_util.py", line 449, in BuildAndPushDockerImage
return _SubmitBuild(build, image, project, parallel_build)
File "/Users/davidpv/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/deploy_command_util.py", line 482, in _SubmitBuild
build, project=project)
File "/Users/davidpv/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/cloudbuild/build.py", line 150, in ExecuteCloudBuild
self.WaitAndStreamLogs(build_op)
File "/Users/davidpv/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/cloudbuild/build.py", line 195, in WaitAndStreamLogs
+ message)
BuildFailedError: Cloud build failed. Check logs at https://console.cloud.google.com/gcr/builds/c874e7e6-d67b-443b-9cc0-169aa6f4dd58?project=702099999392 Failure status: UNKNOWN: Error Response: [2] Build failed; check build logs for details
ERROR: (gcloud.app.deploy) Cloud build failed. Check logs at https://console.cloud.google.com/gcr/builds/c874e7e6-d67b-443b-9cc0-169aa6f4dd58?project=702099999392 Failure status: UNKNOWN: Error Response: [2] Build failed; check build logs for details

I just found that adding "skip_lockdown_document_root: true" under the "runtime_config" node makes the deploy stop failing. Can anyone explain, in plain terms, what this option does? I haven't found a clear explanation, and I'd like to know what's going on under the hood with "skip_lockdown_document_root". I can guess that it just skips a chmod/chown and that's it, but I'd like to confirm this. Thanks.
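Update: from the build output above, my best guess at what /build-scripts/lockdown.sh does is something along these lines (only a reconstruction from the error message, not the real script):

# hypothetical sketch, NOT the actual lockdown.sh from the runtime image
if [ "${SKIP_LOCKDOWN_DOCUMENT_ROOT}" != "true" ]; then
    echo "Locking down the document root..."
    # make the document root read-only for the runtime user;
    # this chown is what fails when /app/public does not exist
    chown -R root.www-data "${DOCUMENT_ROOT}"
    find "${DOCUMENT_ROOT}" -type d -exec chmod 550 {} \;
    find "${DOCUMENT_ROOT}" -type f -exec chmod 440 {} \;
fi

If that guess is right, skipping the step only hides the symptom: the underlying problem would be that /app/public is missing from the uploaded source in the first place.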

I had a similar issue; it turned out I had changed the way I deployed the project, and the path was no longer correct.
I used the mentioned "skip_lockdown_document_root" option to be able to start the VM, SSH into it and understand the problem (I was getting a 500 error).
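For reference, getting a shell on a flex instance looks roughly like this (the instance and version IDs are placeholders):

# list running flex instances, then SSH into one
gcloud app instances list
gcloud app instances ssh INSTANCE_ID --service api-dev --version VERSION_ID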

Related

JupyterLab terminal not working, kernel not loaded

I am deploying JupyterLab on k8s. It uses the kernel.json below, and it creates a runtime connection file and loads the kernel when I start a notebook, a console or pyspark; however, I couldn't find a runtime connection file when starting a terminal.
For example, if I load a notebook, the runtime file is created as below:
/opt/spark/work-dir/.local/share/jupyter/runtime/kernel-xxxxxxxxxxxx-4x9e-xxx5-b32f7xxxxxx.json
{
  "shell_port": 37215,
  "iopub_port": 59355,
  "stdin_port": 34241,
  "control_port": 38303,
  "hb_port": 55995,
  "ip": "127.0.0.1",
  "key": "",
  "transport": "tcp",
  "signature_scheme": "hmac-sha256",
  "kernel_name": ""
}
kernel.json
{
  "display_name": "PySpark",
  "language": "python",
  "argv": ["/usr/bin/python", "-m", "ipykernel", "-f", "{connection_file}"],
  "env": {
    "SPARK_HOME": "/opt/spark",
    "PYTHONPATH": "/opt/spark/python/lib/py4j-0.10.9.2-src.zip:/opt/spark/python/lib/pyspark.zip",
    "PYTHONSTARTUP": "/opt/spark/work-dir/.local/share/jupyter/startup/pyspark/startup.py",
    "SPARKSHELL": "/opt/spark/python/pyspark/shell.py",
    "PYSPARK_SUBMIT_ARGS": "--conf spark.pyspark.python=/usr/bin/python pyspark-shell",
    "PYSPARK_PYTHON": "/usr/bin/python",
    "PYSPARK_DRIVER_PYTHON": "/usr/bin/python"
  }
}
There is no error reported in the logs for this failure; I only get this message whenever I open the terminal:
[I 2022-08-01 22:18:09.222 ServerApp] New terminal with automatic name: 1
[D 2022-08-01 22:18:09.223 ServerApp] 200 POST /addr/component/cluster/deployment/jupyter/api/terminals?1659385088215 (10.2xx.xxx.xx) 77.89ms
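One quick check that may help (assuming the jupyter CLI is available inside the pod) is to print where Jupyter expects runtime connection files and compare it with the path above:

# show the runtime directory where connection files are written
jupyter --runtime-dir

Note also that JupyterLab terminals are served by the terminado backend rather than by a kernel, so opening a terminal does not create a kernel connection file in the first place.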

unable to control swarm ingress network with ansible

I'm deploying Docker swarm with ansible and I would like to ensure the ingress network has been created. To that end, I configured the following task:
- name: Ensure ingress network exists
  docker_network:
    state: present
    name: ingress
    driver: overlay
    driver_options:
      ingress: true
And I'm getting the following error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: docker.errors.NotFound: 404 Client Error for http+docker://localhost/v1.41/networks/ingress/disconnect: Not Found ("No such container: ingress-endpoint")
fatal: [swarm-srv-1]: FAILED! => {"changed": false, "msg": "An unexpected docker error occurred: 404 Client Error for http+docker://localhost/v1.41/networks/ingress/disconnect: Not Found (\"No such container: ingress-endpoint\")"}
I've tried adding some arguments like:
scope: swarm
force: yes
But no change... I've also tried to delete the ingress network with ansible (state: absent), but I always get the same error.
Note that I don't face any issue when deleting and recreating the ingress network manually on the swarm:
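(For completeness, a rough sketch of those manual steps; the create flags here mirror Docker's defaults for the ingress network.)

# remove and recreate the ingress network by hand
docker network rm ingress
docker network create --driver overlay --ingress ingress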
I don't know how to resolve this issue... Any help would be appreciated. Thanks!
Here is some information that may help:
# docker version
Version: 20.10.6
API version: 1.41
Go version: go1.13.15
Git commit: 370c289
Built: Fri Apr 9 22:47:35 2021
OS/Arch: linux/amd64
# docker inspect ingress
[
    {
        "Name": "ingress",
        "Id": "yb2tkhep8vtaj9q7w3mssc9lx",
        "Created": "2021-05-19T05:53:27.524446929-04:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": true,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "ingress-sbox": {
                "Name": "ingress-endpoint",
                "EndpointID": "dfdc0f123d21a196c7a815c7e0a886924d0799ae5f3be2d38b64d527ed4620b1",
                "MacAddress": "02:42:0a:00:00:02",
                "IPv4Address": "10.0.0.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4096"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "8f8932d6f99f",
                "IP": "(ip address here)"
            },
            {
                "Name": "28b9ca95dcf0",
                "IP": "(ip address here)"
            },
            {
                "Name": "f7c48c8af2f5",
                "IP": "(ip address here)"
            }
        ]
    }
]
I had the exact same issue when trying to customize the IP range of the ingress network. It looks like the docker_network module does not support modification of swarm-specific networks: there is an open GitHub issue for this.
I went for the ugly workaround of removing the network through a shell command (docker network rm ingress) and adding it again. When adding it back with the docker_network module, I found that creating also doesn't seem to work (it fails to set the ingress property of the network). So I ended up doing both the remove and the create operation through shell commands.
Since the removal will trigger a confirmation dialogue:
WARNING! Before removing the routing-mesh network, make sure all the nodes in your swarm run the same docker engine version. Otherwise, removal may not be effective and functionality of newly create ingress networks will be impaired.
Are you sure you want to continue? [y/N]
I used the expect module to confirm the dialogue:
- name: remove default ingress network
  ansible.builtin.expect:
    command: docker network rm ingress
    responses:
      "[y/N]": "y"

- name: create customized ingress network
  shell: "docker network create --ingress --subnet {{ docker_ingress_network }} --driver overlay ingress"
It is not perfect but it works.
There was one last problem I experienced: when running this on an existing swarm I ended up having network issues on the node where I ran it (somehow the docker_gwbridge network on that node could not handle the change). The fix was to fully remove the node from the swarm and re-join it (this regenerates the docker_gwbridge).

Mounting AWS EBS into CoreOS

I have launched an EC2 instance with a 100 GB EBS volume, following the https://coreos.com/os/docs/latest/booting-on-ec2.html docs.
#cloud-config
coreos:
  units:
    - name: media-ephemeral.mount
      command: start
      content: |
        [Mount]
        What=/dev/xvdb
        Where=/media/ephemeral
        Type=ext4
    - name: format-ephemeral.service
      command: start
      content: |
        [Unit]
        Description=Formats the ephemeral drive
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/sbin/wipefs -f /dev/xvdb
        ExecStart=/usr/sbin/mkfs.btrfs -f /dev/xvdb
    - name: var-lib-docker.mount
      command: start
      content: |
        [Unit]
        Description=Mount ephemeral to /var/lib/docker
        Requires=format-ephemeral.service
        After=format-ephemeral.service
        Before=docker.service
        [Mount]
        What=/dev/xvdb
        Where=/var/lib/docker
        Type=btrfs
If I run the above, the EBS volume is mounted correctly, but on system reboot the mount does not persist.
Using
storage:
  filesystems:
    - name: ephemeral1
      mount:
        device: /dev/xvdb
        format: ext4
        wipe_filesystem: true
systemd:
  units:
    - name: media-ephemeral.mount
      enable: true
      contents: |
        [Unit]
        Before=local-fs.target
        [Mount]
        What=/dev/xvdb
        Where=/media/ephemeral
        Type=ext4
        [Install]
        WantedBy=local-fs.target
    - name: var-lib-docker.mount
      enable: true
      contents: |
        [Unit]
        Description=Mount ephemeral to /var/lib/docker
        Before=local-fs.target
        [Mount]
        What=/dev/xvdb
        Where=/var/lib/docker
        Type=ext4
        [Install]
        WantedBy=local-fs.target
    - name: docker.service
      dropins:
        - name: 10-wait-docker.conf
          contents: |
            [Unit]
            After=var-lib-docker.mount
            Requires=var-lib-docker.mount
As per the docs, I get:
core@ip-10-1-2-188 ~ $ sudo /usr/bin/coreos-cloudinit --from-file storage1.conf
2019/01/15 17:09:28 Checking availability of "local-file"
2019/01/15 17:09:28 Fetching user-data from datasource of type "local-file"
2019/01/15 17:09:28 line 2: warning: unrecognized key "storage"
2019/01/15 17:09:28 line 9: warning: unrecognized key "systemd"
2019/01/15 17:09:28 Fetching meta-data from datasource of type "local-file"
2019/01/15 17:09:28 Parsing user-data as cloud-config
2019/01/15 17:09:28 Merging cloud-config from meta-data and user-data
2019/01/15 17:09:28 Updated /etc/environment
2019/01/15 17:09:28 Ensuring runtime unit file "etcd.service" is unmasked
2019/01/15 17:09:28 Ensuring runtime unit file "etcd2.service" is unmasked
2019/01/15 17:09:28 Ensuring runtime unit file "fleet.service" is unmasked
2019/01/15 17:09:28 Ensuring runtime unit file "locksmithd.service" is unmasked
core@ip-10-1-2-188 ~ $ cat /etc/os-release
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1967.3.0
VERSION_ID=1967.3.0
BUILD_ID=2019-01-08-0044
PRETTY_NAME="Container Linux by CoreOS 1967.3.0 (Rhyolite)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"
What is the correct way to mount the EBS volume on CoreOS?
Any advice is much appreciated
It looks like you missed a step. Cloud-configs have been deprecated for quite some time now. You correctly converted that cloud-config into a Container Linux Config (CLC) file, but missed using the config transpiler (ct) to then render an Ignition config. You can check this by running your config through the online validator. After running that CLC config through the config transpiler I get the following, which validates correctly:
{
  "ignition": {
    "config": {},
    "timeouts": {},
    "version": "2.1.0"
  },
  "networkd": {},
  "passwd": {},
  "storage": {
    "filesystems": [
      {
        "mount": {
          "device": "/dev/xvdb",
          "format": "ext4",
          "wipeFilesystem": true
        },
        "name": "ephemeral1"
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhat=/dev/xvdb\nWhere=/media/ephemeral\nType=ext4\n[Install]\nWantedBy=local-fs.target\n",
        "enable": true,
        "name": "media-ephemeral.mount"
      },
      {
        "contents": "[Unit]\nDescription=Mount ephemeral to /var/lib/docker\nBefore=local-fs.target\n[Mount]\nWhat=/dev/xvdb\nWhere=/var/lib/docker\nType=ext4\n[Install]\nWantedBy=local-fs.target\n",
        "enable": true,
        "name": "var-lib-docker.mount"
      },
      {
        "dropins": [
          {
            "contents": "[Unit]\nAfter=var-lib-docker.mount\nRequires=var-lib-docker.mount\n",
            "name": "10-wait-docker.conf"
          }
        ],
        "name": "docker.service"
      }
    ]
  }
}
Additionally, it's important to note that there are other differences between Ignition and coreos-cloudinit, the most important of which is that Ignition only runs once, on first boot. Thus, for things like wiping the contents of that ephemeral disk, you should not expect wipe_filesystem: true to run on every boot.
Try booting the machine with this config instead. You should get the expected results.
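In case it helps, the transpiler invocation looks roughly like this (ct is the binary from the container-linux-config-transpiler project; exact flags may vary between versions):

# render the CLC file into an Ignition config, with EC2 platform substitutions
ct -platform=ec2 -in-file storage1.conf -out-file ignition.json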

new composer-wallet - jszip error

I am making a new composer-wallet with composer 0.19.0.
All tests passed fine (the tests are based on composer-wallet-filesystem).
I can successfully import business network cards to the new wallet and use them for transactions.
I have only one issue:
$ composer card list
Error: Can't find end of central directory : is this a zip file ? If it is, see http://stuk.github.io/jszip/documentation/howto/read_zip.html
Command failed
I tried updating jszip to the latest version in composer-cli, but I get the same problem.
Here is the environment variable used to configure the connection:
export NODE_CONFIG='{
  "composer": {
    "wallet": {
      "type": "composer-wallet-mongodb",
      "desc": "Uses a local mongodb instance",
      "options": {
        "uri": "mongodb://localhost:27017/yourCollection",
        "collectionName": "myWallet",
        "options": {
        }
      }
    }
  }
}'
Any help is welcome.

msg": "failed to create temporary content file: timed out" ansible error

I am getting the following error:
"url": "http://redrockdigimark.com/apachemirror/kafka/0.10.2.0/kafka_2.10-0.10.2.0.tgz",
"url_password": "****",
"url_username": "***",
"use_proxy": true,
"validate_certs": true
},
"module_name": "get_url"
},
"msg": "failed to create temporary content file: timed out"
}
to retry, use: --limit @/etc/ansible/kafka_install.retry
As @marcv81 mentioned in his comment, you can try increasing the timeout for the get_url module (available since Ansible 1.8), as mentioned in the documentation:
- name: Download file
  get_url:
    url: http://redrockdigimark.com/apachemirror/kafka/0.10.2.0/kafka_2.10-0.10.2.0.tgz
    dest: /tmp
    timeout: 300
The timeout value is for the request call, not for the entire download
It sounds like your task is timing out. You can increase the timeout in the Ansible config and rerun the play.
The default config file resides at /etc/ansible/ansible.cfg.
Add the line below under the [defaults] section if it is not there; otherwise edit it:
timeout = 60
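Alternatively, the same setting can be supplied per run through Ansible's environment override instead of editing the config file (the playbook name below is just a placeholder):

# ANSIBLE_TIMEOUT overrides the "timeout" setting for this shell session
export ANSIBLE_TIMEOUT=60
ansible-playbook kafka_install.yml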
