hyperledger configtxgen error - yaml

I am trying to build two blockchains on two different VPSs. The first one works, but after many hours of research I still haven't found why the second blockchain won't build.
I built the crypto-config folder without problems, but when I try to build the channel-artifacts folder it fails, even though I follow exactly the same approach. Here is the log:
2018-07-05 17:05:43.046 CEST [common/tools/configtxgen] main -> WARN 001 Omitting the channel ID for configtxgen is deprecated. Explicitly passing the channel ID will be required in the future, defaulting to 'testchainid'.
2018-07-05 17:05:43.046 CEST [common/tools/configtxgen] main -> INFO 002 Loading configuration
2018-07-05 17:05:43.046 CEST [common/tools/configtxgen/localconfig] Load -> CRIT 003 Error reading configuration: While parsing config: yaml: unknown anchor 'ChannelCapabilities' referenced
2018-07-05 17:05:43.047 CEST [common/tools/configtxgen] func1 -> CRIT 004 Error reading configuration: While parsing config: yaml: unknown anchor 'ChannelCapabilities' referenced
panic: Error reading configuration: While parsing config: yaml: unknown anchor 'ChannelCapabilities' referenced [recovered]
panic: Error reading configuration: While parsing config: yaml: unknown anchor 'ChannelCapabilities' referenced
goroutine 1 [running]:
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panic(0xc420199e00, 0xc420414390, 0x1, 0x1)
/w/workspace/fabric-nightly-release-job-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:188 +0xbd
main.main.func1()
/w/workspace/fabric-nightly-release-job-x86_64/gopath/src/github.com/hyperledger/fabric/common/tools/configtxgen/main.go:254 +0x1ae
panic(0xc6ed20, 0xc420414380)
/opt/go/go1.10.linux.amd64/src/runtime/panic.go:505 +0x229
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panic(0xc420199c50, 0xc4201916a0, 0x2, 0x2)
/w/workspace/fabric-nightly-release-job-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:188 +0xbd
github.com/hyperledger/fabric/common/tools/configtxgen/localconfig.Load(0x7ffdd627483b, 0x15, 0x0, 0x0, 0x0, 0x1)
/w/workspace/fabric-nightly-release-job-x86_64/gopath/src/github.com/hyperledger/fabric/common/tools/configtxgen/localconfig/config.go:277 +0x469
main.main()
/w/workspace/fabric-nightly-release-job-x86_64/gopath/src/github.com/hyperledger/fabric/common/tools/configtxgen/main.go:265 +0xce7
My configtx.yaml file is basically the same as the one from first-network, with just the paths changed.
Any help?

This seems to be related to the 1.2.0 release. I was able to get the CLI running again by downgrading to 1.1.0 (hyperledger/fabric-ca-tools:x86_64-1.1.0).
Ref: https://hub.docker.com/r/hyperledger/fabric-ca-tools/tags/
Edit: https://github.com/hyperledger/fabric/releases/tag/v1.2.0
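For reference, a sketch of pulling that downgraded image (the tag is the one mentioned above):
docker pull hyperledger/fabric-ca-tools:x86_64-1.1.0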
My fix was to make sure the Organizations section is at the top. I think all you need to do is move the section containing &ChannelCapabilities higher in your configtx.yaml.
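The reason order matters: in YAML, an anchor (&name) must be defined before any alias (*name) that references it, and the parser reads the file top to bottom. A minimal sketch of the pattern configtx.yaml relies on (section names follow the first-network sample; the capability flag is illustrative):
# This section must appear earlier in the file than any use of *ChannelCapabilities,
# otherwise parsing fails with "unknown anchor 'ChannelCapabilities' referenced".
Capabilities:
  Channel: &ChannelCapabilities
    V1_1: true

# ...later in the file, the alias can now resolve:
Channel: &ChannelDefaults
  Capabilities:
    <<: *ChannelCapabilities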

Related

Docker Desktop not starting: possible elevated privileges issue

When I launch Docker Desktop, I cannot see it in the taskbar or the system tray, but I can see it running as a background process in Task Manager.
When I try to run the diagnostics, I get:
[2022-11-11T12:05:32.437862000Z][com.docker.diagnose.exe][I] set path configuration to OnHost
Starting diagnostics
[PASS] DD0027: is there available disk space on the host?
[SKIP] DD0028: is there available VM disk space?
[PASS] DD0002: does the bootloader have virtualization enabled?
[PASS] DD0018: does the host support virtualization?
[PASS] DD0001: is the application running?
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x1c1 pc=0x15ff2f1]
goroutine 1 [running]:
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose.findStringInKMSG({0xc000629140?, 0xc000438680?})
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose/vm.go:54 +0x51
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose.vmStartWorks()
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose/vm.go:40 +0x25
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose.(*test).GetResult(0x1d43f60)
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose/test.go:46 +0x43
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose.Run.func1(0x1d43f60)
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose/run.go:17 +0x5a
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose.walkOnce.func1(0x6?, 0x1d43f60)
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose/run.go:142 +0x77
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose.walkDepthFirst(0x5, 0x1d43f60, 0xc00061f728)
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose/run.go:151 +0x87
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose.walkDepthFirst(0x4, 0x1d43fe0, 0xc00061f728)
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose/run.go:148 +0x52
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose.walkDepthFirst(0x3, 0x1d440e0, 0xc00061f728)
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose/run.go:148 +0x52
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose.walkDepthFirst(0x2, 0x1d44160, 0xc00061f728)
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose/run.go:148 +0x52
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose.walkDepthFirst(0x1, 0x1d441e0, 0xc00061f728)
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose/run.go:148 +0x52
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose.walkDepthFirst(0x0, 0x1d44960, 0xc00061f728)
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose/run.go:148 +0x52
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose.walkOnce(0x16e29c0?, 0xc00035f890)
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose/run.go:137 +0xcc
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose.Run(0x1d44960, 0xaa0fd02800000010?, {0xc00035fb20, 0x1, 0x1})
github.com/docker/pinata/common/pkg/diagkit/gather/diagnose/run.go:16 +0x1d4
main.checkCmd({0xc0000743d0?, 0xc0000743d0?, 0x4?}, {0x0, 0x0})
github.com/docker/pinata/common/cmd/com.docker.diagnose/main.go:133 +0x105
main.main()
github.com/docker/pinata/common/cmd/com.docker.diagnose/main.go:99 +0x287
The docker client also can't connect:
PS C:\Program Files\Docker\Docker\resources> docker version
error during connect: In the default daemon configuration on Windows, the docker client must be run with elevated privileges to connect.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/version": open //./pipe/docker_engine: The system cannot find the file specified.
Client:
Cloud integration: v1.0.29
Version: 20.10.21
API version: 1.41
Go version: go1.18.7
Git commit: baeda1f
Built: Tue Oct 25 18:08:16 2022
OS/Arch: windows/amd64
Context: default
Experimental: true
I tried to run:
$ & 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon
from a different Stack Overflow answer, but it didn't work.
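For what it's worth, the connect error above says the engine's named pipe could not be found. A quick diagnostic sketch (not a fix) to confirm from PowerShell whether the pipe exists; it won't until the daemon actually starts:
# Returns False while the Docker engine is not running
Test-Path '\\.\pipe\docker_engine'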

SharePoint Sample WebPart build errors

Good evening. I get the following error when I try to build a sample web part. I am quite lost now and very new to this.
\react-tiles-v2> gulp serve
Build target: DEBUG
[19:58:28] Using gulpfile ~\Downloads\sp-dev-fx-webparts-main\sp-dev-fx-webparts-main\samples\react-tiles-v2\gulpfile.js
[19:58:28] Starting gulp
[19:58:28] Starting 'serve'...
[19:58:28] Starting subtask 'configure-sp-build-rig'...
[19:58:28] Finished subtask 'configure-sp-build-rig' after 4.94 ms
[19:58:28] Starting subtask 'spfx-serve'...
[19:58:28] [spfx-serve] To load your scripts, use this query string: ?debug=true&noredir=true&debugManifestsFile=https://localhost:4321/temp/manifests.js
[19:58:28] Starting server...
Starting api server on port 5432.
Registring api: /workbench
Registring api: /
[19:58:29] Finished subtask 'spfx-serve' after 305 ms
[19:58:29] Starting subtask 'pre-copy'...
[19:58:29] Finished subtask 'pre-copy' after 12 ms
[19:58:29] Starting subtask 'copy-static-assets'...
[19:58:29] Starting subtask 'sass'...
[19:58:29] Server started https://localhost:4321
[19:58:29] LiveReload started on port 35729
[19:58:29] Running server
[19:58:29] Opening https://iococlouddev.sharepoint.com/sites/STD_DEV/_layouts/15/workbench.aspx using the default OS app
Browserslist: caniuse-lite is outdated. Please run:
npx browserslist@latest --update-db
Why you should do it regularly:
https://github.com/browserslist/browserslist#browsers-data-updating
[19:58:29] Finished subtask 'copy-static-assets' after 390 ms
[19:58:29] Finished subtask 'sass' after 656 ms
[19:58:29] Starting subtask 'tslint'...
[19:58:30] [tslint] tslint version: 5.12.1
[19:58:30] Starting subtask 'tsc'...
[19:58:30] [tsc] typescript version: 3.3.4000
[19:58:40] Error - [tsc] src/webparts/Tiles/TilesWebPart.ts(39,61): error TS2345: Argument of type 'import("C:/Users/admin/Downloads/sp-dev-fx-webparts-main/sp-dev-fx-webparts-main/samples/react-tiles-v2/node_modules/@microsoft/sp-http/node_modules/@microsoft/sp-core-library/dist/index-internal").ServiceKey<import("C:/Users/admin/Downloads/sp-dev-fx-webparts-main/sp-dev-fx-webparts-main/samples/react-tiles-v2/node...' is not assignable to parameter of type 'import("C:/Users/admin/Downloads/sp-dev-fx-webparts-main/sp-dev-fx-webparts-main/samples/react-tiles-v2/node_modules/@microsoft/sp-core-library/dist/index-internal").ServiceKey<import("C:/Users/admin/Downloads/sp-dev-fx-webparts-main/sp-dev-fx-webparts-main/samples/react-tiles-v2/node_modules/@microsoft/sp-component...'.
[19:58:40] [tsc] Types of property 'defaultCreator' are incompatible.
[19:58:40] [tsc] Type 'import("C:/Users/admin/Downloads/sp-dev-fx-webparts-main/sp-dev-fx-webparts-main/samples/react-tiles-v2/node_modules/@microsoft/sp-http/node_modules/@microsoft/sp-core-library/dist/index-internal").ServiceCreator<import("C:/Users/admin/Downloads/sp-dev-fx-webparts-main/sp-dev-fx-webparts-main/samples/react-tiles-v2/...' is not assignable to type 'import("C:/Users/admin/Downloads/sp-dev-fx-webparts-main/sp-dev-fx-webparts-main/samples/react-tiles-v2/node_modules/@microsoft/sp-core-library/dist/index-internal").ServiceCreator<import("C:/Users/admin/Downloads/sp-dev-fx-webparts-main/sp-dev-fx-webparts-main/samples/react-tiles-v2/node_modules/@microsoft/sp-compo...'.
[19:58:40] [tsc] Types of parameters 'serviceScope' and 'serviceScope' are incompatible.
[19:58:40] [tsc] Type 'import("C:/Users/admin/Downloads/sp-dev-fx-webparts-main/sp-dev-fx-webparts-main/samples/react-tiles-v2/node_modules/@microsoft/sp-core-library/dist/index-internal").ServiceScope' is not assignable to type 'import("C:/Users/admin/Downloads/sp-dev-fx-webparts-main/sp-dev-fx-webparts-main/samples/react-tiles-v2/node_modules/@microsoft/sp-http/node_modules/@microsoft/sp-core-library/dist/index-internal").ServiceScope'.
[19:58:40] [tsc] Types have separate declarations of a private property '_registrations'.
[19:58:40] Error - 'tsc' sub task errored after 9.85 s
exited with code 2
[19:58:42] Finished subtask 'tslint' after 13 s
Request: '/temp/manifests.js'
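The nested path node_modules/@microsoft/sp-http/node_modules/@microsoft/sp-core-library in the tsc output suggests two copies of sp-core-library are installed, and their private types are being compared against each other. A hedged cleanup sketch, assuming npm is the package manager (run from the react-tiles-v2 folder):
# Remove the installed tree and the lockfile, then reinstall so npm can
# resolve @microsoft/sp-core-library to a single copy
rm -rf node_modules package-lock.json
npm install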

Segmentation Fault: Running nvidia deepstream 5.0 SDK on Ubuntu

I am trying to run NVIDIA's DeepStream 5.0 SDK (sample program) on Ubuntu 18.04 by following the documentation (DeepStream Development Guide — DeepStream Version: 5.0 documentation).
Hardware Platform (Jetson / GPU)=GPU NVIDIA GEFORCE RTX 2060
TensorRT Version=7.0
NVIDIA GPU Driver Version (valid for GPU only):450.102
Issue Type( questions, new requirements, bugs)=bugs
GCC=7.5
PYTHON 3.7
CUDNN 7.6.5
CUDA 10.2
The application is installed in the path: “/opt/nvidia/deepstream/deepstream-5.0/”.
The execution command is "deepstream-app -c "
Example:
deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt
However, I get a segmentation fault: a blank screen opens and then closes suddenly.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/…/…/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:01.788894483 9829 0x5594636fc490 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 6]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/…/…/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:01.788911328 9829 0x5594636fc490 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 6]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/…/…/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:01.788917862 9829 0x5594636fc490 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 6]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 1 output network tensors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1495 Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine opened error
0:00:11.045161759 9829 0x5594636fc490 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1743> [UID = 6]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 20x1x1
0:00:11.054222978 9829 0x5594636fc490 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_2> [UID 6]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_secondary_carmake.txt sucessfully
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/…/…/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:11.054352982 9829 0x5594636fc490 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 5]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/…/…/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:11.054360902 9829 0x5594636fc490 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 5]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/…/…/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:11.054365641 9829 0x5594636fc490 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 5]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 1 output network tensors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1495 Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine opened error
0:00:19.492522201 9829 0x5594636fc490 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1743> [UID = 5]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 12x1x1
0:00:19.497783953 9829 0x5594636fc490 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_1> [UID 5]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_secondary_carcolor.txt sucessfully
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/…/…/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:19.497944601 9829 0x5594636fc490 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 4]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/…/…/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:19.497954066 9829 0x5594636fc490 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 4]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/…/…/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:19.497959157 9829 0x5594636fc490 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 4]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 1 output network tensors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1495 Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine opened error
0:00:27.394531547 9829 0x5594636fc490 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1743> [UID = 4]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 6x1x1
0:00:27.401846636 9829 0x5594636fc490 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_0> [UID 4]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_secondary_vehicletypes.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/…/…/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine open error
0:00:27.405130601 9829 0x5594636fc490 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/…/…/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine failed
0:00:27.405139410 9829 0x5594636fc490 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/…/…/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine failed, try rebuild
0:00:27.405144384 9829 0x5594636fc490 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 2 output network tensors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1495 Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine opened error
0:00:32.442386732 9829 0x5594636fc490 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1743> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:32.447113083 9829 0x5594636fc490 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.
**PERF: FPS 0 (Avg) FPS 1 (Avg) FPS 2 (Avg) FPS 3 (Avg)
**PERF: 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00)
** INFO: <bus_callback:181>: Pipeline ready
** INFO: <bus_callback:167>: Pipeline running
Segmentation fault (core dumped)
My NVIDIA driver and CUDA versions are shown below:
A bit late with the answer.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/…/…/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine open error
The error message already gives you the clue: it points to an engine file that does not exist at that path. Try providing the full path to the engine file in the config file.
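A sketch of what that could look like, assuming the stock sample layout (model-engine-file is the relevant nvinfer config key; the exact file name should match the engine the app tried to build above):
# In config_infer_primary.txt, point at the engine with an absolute path
# instead of a relative ../../models/... one:
model-engine-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine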

How to dump goroutine stack traces of the running kubelet

Kubernetes is complicated, and the kubelet can run into deadlocks after running for a long time in some scenarios.
Is there a way to dump the goroutine stack traces of a running kubelet?
The expected output looks like the following, which is very helpful for debugging deadlock-type kubelet issues.
goroutine 386 [chan send, 1140 minutes]:
k8s.io/kubernetes/pkg/kubelet/pleg.(*GenericPLEG).relist(0xc42069ea20)
/workspace/anago-v1.11.5-beta.0.24+753b2dbc622f5c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/generic.go:261 +0x74e
k8s.io/kubernetes/pkg/kubelet/pleg.(*GenericPLEG).(k8s.io/kubernetes/pkg/kubelet/pleg.relist)-fm()
/workspace/anago-v1.11.5-beta.0.24+753b2dbc622f5c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/generic.go:130 +0x2a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc4212ee520)
/workspace/anago-v1.11.5-beta.0.24+753b2dbc622f5c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc4212ee520, 0x3b9aca00, 0x0, 0x1, 0xc420056540)
/workspace/anago-v1.11.5-beta.0.24+753b2dbc622f5c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbd
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc4212ee520, 0x3b9aca00, 0xc420056540)
/workspace/anago-v1.11.5-beta.0.24+753b2dbc622f5c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by k8s.io/kubernetes/pkg/kubelet/pleg.(*GenericPLEG).Start
/workspace/anago-v1.11.5-beta.0.24+753b2dbc622f5c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/generic.go:130 +0x88
...
goroutine 309 [sleep]:
time.Sleep(0x12a05f200)
/usr/local/go/src/runtime/time.go:102 +0x166
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).syncLoop(0xc4205e3b00, 0xc420ff2780, 0x3e56a60, 0xc4205e3b00)
/workspace/anago-v1.11.5-beta.0.24+753b2dbc622f5c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1777 +0x1e7
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).Run(0xc4205e3b00, 0xc420ff2780)
/workspace/anago-v1.11.5-beta.0.24+753b2dbc622f5c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1396 +0x27f
k8s.io/kubernetes/cmd/kubelet/app.startKubelet.func1()
/workspace/anago-v1.11.5-beta.0.24+753b2dbc622f5c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:998 +0x67
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc42105dfb0)
/workspace/anago-v1.11.5-beta.0.24+753b2dbc622f5c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc42105dfb0, 0x0, 0x0, 0x1, 0xc420056540)
/workspace/anago-v1.11.5-beta.0.24+753b2dbc622f5c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbd
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc42105dfb0, 0x0, 0xc420056540)
/workspace/anago-v1.11.5-beta.0.24+753b2dbc622f5c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by k8s.io/kubernetes/cmd/kubelet/app.startKubelet
/workspace/anago-v1.11.5-beta.0.24+753b2dbc622f5c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:996 +0xea
...
I would appreciate it if anyone could share their experience with dumping the goroutine stack traces of the kubelet, similar to what Docker provides[1]:
$ pkill -SIGUSR1 dockerd
[1]. https://success.docker.com/article/how-to-dump-goroutines-stacktraces
Option 1: pprof, which keeps the kubelet running:
install go on node-x
run "kubectl proxy" in one terminal
curl http://localhost:8001/api/v1/proxy/nodes/node-x/debug/pprof/goroutine?debug=2
Note: the proxy API path changed across Kubernetes versions; for 1.16 it is:
curl http://127.0.0.1:8001/api/v1/nodes/node-**/proxy/debug/pprof/goroutine?debug=2
Option 2: send a signal to the kubelet, which causes it to exit with a stack dump:
kill -SIGABRT
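A worked sketch of both options (the node name node-x and the pidof helper are illustrative; 8001 is kubectl proxy's default port, and the SIGABRT dump lands in the kubelet's log or journal):
# Non-disruptive: fetch the goroutine dump through the API server proxy (1.16 path)
kubectl proxy --port=8001 &
curl "http://127.0.0.1:8001/api/v1/nodes/node-x/proxy/debug/pprof/goroutine?debug=2" > kubelet-goroutines.txt
# Disruptive: the Go runtime aborts with a full goroutine dump on SIGABRT
kill -SIGABRT "$(pidof kubelet)"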

Why is fluentd not starting?

I tried it as described in your document, but I am getting this error:
2019-02-12 21:17:32 +0530 [error]: config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ObsoletedParameterError error="'host' parameter is already removed: Use <server> section instead."
What does it mean?
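It means the config uses a parameter that was removed in newer Fluentd versions: a top-level host on an output that now expects a nested <server> section (this message pattern matches Fluentd's out_forward plugin). A hedged sketch of the fix, assuming a forward output (the match pattern and host/port values are illustrative):
# Old, removed style:
#   <match **>
#     @type forward
#     host 192.168.1.10
#     port 24224
#   </match>
# Current style, with the endpoint moved into a <server> section:
<match **>
  @type forward
  <server>
    host 192.168.1.10
    port 24224
  </server>
</match>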
