I'm trying to automate a Vault v0.8.0 deployment (Vault from HashiCorp) with a Consul v0.9.1 backend.
Because it is a trial-and-error process, I need to run "vault init" a couple of times (until I get it right) and get the keys.
Unfortunately I lost the keys and the root token.
I tried stopping the Vault and Consul services, but nothing changed; I still get:
"* Vault is already initialized" and "* Vault is sealed"
I stopped Vault, removed the vault path from Consul, and started Vault again. Same result, and at "vault init" I receive this error:
* expiration state restore failed: failed to scan for leases: list failed at path '': Unexpected response code: 403
It then recreates the vault/ path in Consul and remains sealed.
How can I "reset" Vault, i.e. make it un-initialized, and start over with "vault init"?
This is the log:
Aug 10 05:01:49 TSLASOWROMM01 vault[9156]: ==> Vault server started! Log data will stream in below:
Aug 10 05:03:26 TSLASOWROMM01 vault[9156]: 2017/08/10 05:03:26.238436 [INFO ] core: security barrier not initialized
Aug 10 05:03:26 TSLASOWROMM01 vault[9156]: 2017/08/10 05:03:26.271844 [INFO ] core: security barrier initialized: shares=5 threshold=3
Aug 10 05:03:26 TSLASOWROMM01 vault[9156]: 2017/08/10 05:03:26.320363 [INFO ] core: post-unseal setup starting
Aug 10 05:03:26 TSLASOWROMM01 vault[9156]: 2017/08/10 05:03:26.342931 [INFO ] core: loaded wrapping token key
Aug 10 05:03:26 TSLASOWROMM01 vault[9156]: 2017/08/10 05:03:26.356895 [INFO ] core: successfully mounted backend: type=generic path=secret/
Aug 10 05:03:26 TSLASOWROMM01 vault[9156]: 2017/08/10 05:03:26.357342 [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
Aug 10 05:03:26 TSLASOWROMM01 vault[9156]: 2017/08/10 05:03:26.357736 [INFO ] core: successfully mounted backend: type=system path=sys/
Aug 10 05:03:26 TSLASOWROMM01 vault[9156]: 2017/08/10 05:03:26.358293 [INFO ] rollback: starting rollback manager
Aug 10 05:03:26 TSLASOWROMM01 vault[9156]: 2017/08/10 05:03:26.381808 [INFO ] expiration: restoring leases
Aug 10 05:03:26 TSLASOWROMM01 vault[9156]: 2017/08/10 05:03:26.383943 [INFO ] core: pre-seal teardown starting
Aug 10 05:03:26 TSLASOWROMM01 vault[9156]: 2017/08/10 05:03:26.384154 [INFO ] core: cluster listeners not running
Aug 10 05:03:26 TSLASOWROMM01 vault[9156]: 2017/08/10 05:03:26.384365 [INFO ] rollback: stopping rollback manager
Aug 10 05:03:26 TSLASOWROMM01 vault[9156]: 2017/08/10 05:03:26.384633 [INFO ] core: pre-seal teardown complete
Aug 10 05:03:26 TSLASOWROMM01 vault[9156]: 2017/08/10 05:03:26.384909 [ERROR] core: post-unseal setup failed during init: error=expiration state restore failed: failed to scan for leases: list failed at path '': Unexpected response code: 403
Per the discussion of this same question here: https://groups.google.com/forum/#!msg/vault-tool/xuO8IInubDg/SBHMP2PKAwAJ, the answer is:
Vault is storing its state in Consul, so if you shut down Vault and delete Vault's key prefix in Consul things should start clean again.
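For the Consul backend that looks roughly like this (a sketch assuming the default vault/ key prefix and a local Consul agent; stop Vault first):
consul kv delete -recurse vault/
After restarting Vault, "vault init" should run against a clean barrier again.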
Just in case someone reads this post with the same intention as I did, i.e. looking for the "file" backend or "database" backend:
File backend:
If you look into the Vault configuration file (e.g. /etc/vault.d/vault.hcl), there is a directive storage "file" { path = "/some/file/name" ... }.
Just empty the directory /some/file/name (do not remove it, just empty it).
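A minimal sketch using that placeholder path (stop Vault before touching its storage):
sudo systemctl stop vault
sudo rm -rf /some/file/name/*
sudo systemctl start vault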
Database backend:
You just have to truncate the "vault_kv_store" table and restart Vault:
psql -U myvaultdbuser -h myvaultDB.host.name -p5432 vaultdatabasname -c 'truncate table vault_kv_store';
To initialize again, point your browser at e.g. http://localhost:8820/ui/vault/init.
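The same works from the CLI; a sketch assuming Vault listens on the default local address (newer versions use "vault operator init", older ones plain "vault init"):
export VAULT_ADDR="http://127.0.0.1:8200"
vault operator init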
With any storage backend of Vault you should be able to just delete your storage. It looks like you were running into a bug with that older version of Consul.
With Vault v1.7.3 I noticed it created a folder vault-data, and I had to rename it to un-initialize.
Related
I am installing the latest MinIO on Ubuntu 18.04, following the MinIO installation instructions from here.
After the installation I tried to run it with sudo systemctl start minio.service,
but it didn't work, with this message:
● minio.service - MinIO
Loaded: loaded (/etc/systemd/system/minio.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2022-12-08 17:03:45 CST; 2min 1s ago
Docs: https://docs.min.io
Process: 5072 ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES (code=exited, status=1/FAILURE)
Process: 5050 ExecStartPre=/bin/bash -c if [ -z "${MINIO_VOLUMES}" ]; then echo "Variable MINIO_VOLUMES not set in /etc/default/minio"; exit 1; fi (code=exited, status=0/SUCCES
Main PID: 5072 (code=exited, status=1/FAILURE)
Dec 08 17:03:45 nky systemd[1]: minio.service: Service hold-off time over, scheduling restart.
Dec 08 17:03:45 nky systemd[1]: minio.service: Scheduled restart job, restart counter is at 5.
Dec 08 17:03:45 nky systemd[1]: Stopped MinIO.
Dec 08 17:03:45 nky systemd[1]: minio.service: Start request repeated too quickly.
Dec 08 17:03:45 nky systemd[1]: minio.service: Failed with result 'exit-code'.
Dec 08 17:03:45 nky systemd[1]: Failed to start MinIO.
It notes something is wrong with 'MINIO_VOLUMES', but I have set the variable in /etc/default/minio:
MINIO_ROOT_USER=myminioadmin
MINIO_ROOT_PASSWORD=minio-secret-key-change-me
# MINIO_VOLUMES sets the storage volume or path to use for the MinIO server.
MINIO_VOLUMES="/mnt/data"
What is wrong with my configuration?
There is nothing obviously wrong with your configuration, but you did not post your service file. This is almost always a permissions issue; you can change the systemd service user to root to test. Other common issues are that the binary is not present at the location specified in the service file, or is not executable.
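A quick way to test the permissions theory, assuming the stock unit from the MinIO docs (which runs as minio-user; your User= line may differ):
grep -E '^(User|Group|ExecStart)=' /etc/systemd/system/minio.service
ls -ld /mnt/data
sudo chown -R minio-user:minio-user /mnt/data
sudo systemctl restart minio.service
If the owner of the volume and the User= in the unit don't match, the chown above is usually the fix.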
After upgrading via the mc command, I get this error when I try to log in to the (kind of new) MinIO console:
Post "https://fqdn.org/": dial tcp 127.0.1.1:443: connect: connection refused
I have a signed and valid SSL Certificate.
Downgrading MinIO (i.e. restoring a VM snapshot) solves the problem.
Any ideas?
This is my config:
MINIO_SERVER_URL="https://fqdn.org"
MINIO_ACCESS_KEY="key"
MINIO_VOLUMES="/mnt/hdd2/minio/"
MINIO_OPTS="-C /etc/minio --address :9000 --console-address :9001"
MINIO_SECRET_KEY="minio"
This is my minio startup log:
● minio.service - MinIO
Loaded: loaded (/etc/systemd/system/minio.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-11-11 08:41:14 CET; 4min 50s ago
Docs: https://docs.min.io
Process: 3567 ExecStartPre=/bin/bash -c if [ -z "${MINIO_VOLUMES}" ]; then echo "Variable MINIO_VOLUMES not set in /etc/default/minio"; exit 1; fi (code=exited, status=0/SUCCESS)
Main PID: 3568 (minio)
Tasks: 9 (limit: 2351)
Memory: 101.9M
CGroup: /system.slice/minio.service
└─3568 /home/minio/minio server -C /etc/minio --address :9000 --console-address :9001 /mnt/hdd2/minio/
Nov 11 08:41:14 pmit-minio-test systemd[1]: Starting MinIO...
Nov 11 08:41:14 pmit-minio-test systemd[1]: Started MinIO.
Nov 11 08:41:17 pmit-minio-test minio[3568]: WARNING: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are deprecated.
Nov 11 08:41:17 pmit-minio-test minio[3568]: Please use MINIO_ROOT_USER and MINIO_ROOT_PASSWORD
Nov 11 08:41:17 pmit-minio-test minio[3568]: API: https://fqdn.org
Nov 11 08:41:17 pmit-minio-test minio[3568]: Console: https://191.164.213.7:9001 https://127.0.0.1:9001
Nov 11 08:41:17 pmit-minio-test minio[3568]: Documentation: https://docs.min.io
Please see the answer here:
https://github.com/minio/minio/issues/13639#issuecomment-966244704
I had to change this line, adding the API port so the console stops trying to reach the server on port 443 (hence the "connection refused" in the error):
MINIO_SERVER_URL="https://fqdn.org:9000"
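To verify the API actually answers on the explicit port, something like this against the placeholder host from the question:
curl -kI https://fqdn.org:9000/minio/health/live
A 200 response confirms the server is reachable on :9000.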
I cannot change the local ES index location; I cannot modify path.data.
That's probably some elementary mistake, but I am stuck and would greatly appreciate any assistance.
So:
Fresh local installation of ES 7.8.1 under CentOS 7. Everything runs correctly if no changes are made in elasticsearch.yml.
But if I change elasticsearch.yml to:
# path.data: /var/lib/elasticsearch'
path.data: /run/media/admin/bvv2/elasticsearch/
(i.e. try to point it at an external disk), then after systemctl start elasticsearch I get:
Job for elasticsearch.service failed because the control process exited with error code. See "systemctl status elasticsearch.service" and "journalctl -xe" for details.
where "systemctl status elasticsearch.service" shows:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/etc/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2020-08-17 16:23:16 MSK; 5min ago
Docs: https://www.elastic.co
Process: 12951 ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 12951 (code=exited, status=1/FAILURE)
Aug 17 16:23:16 bvvcomp systemd-entrypoint[12951]: at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
Aug 17 16:23:16 bvvcomp systemd-entrypoint[12951]: at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127)
Aug 17 16:23:16 bvvcomp systemd-entrypoint[12951]: at org.elasticsearch.cli.Command.main(Command.java:90)
Aug 17 16:23:16 bvvcomp systemd-entrypoint[12951]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126)
Aug 17 16:23:16 bvvcomp systemd-entrypoint[12951]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92)
Aug 17 16:23:16 bvvcomp systemd-entrypoint[12951]: For complete error details, refer to the log at /var/log/elasticsearch/elasticsearch.log
Aug 17 16:23:16 bvvcomp systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Aug 17 16:23:16 bvvcomp systemd[1]: Failed to start Elasticsearch.
Aug 17 16:23:16 bvvcomp systemd[1]: Unit elasticsearch.service entered failed state.
Aug 17 16:23:16 bvvcomp systemd[1]: elasticsearch.service failed.
And in journalctl -xe:
Aug 17 16:29:20 bvvcomp NetworkManager[1112]: <info> [1597670960.1568] dhcp4 (wlp2s0): gateway 192.168.1.1
Aug 17 16:29:20 bvvcomp NetworkManager[1112]: <info> [1597670960.1569] dhcp4 (wlp2s0): lease time 25200
Aug 17 16:29:20 bvvcomp NetworkManager[1112]: <info> [1597670960.1569] dhcp4 (wlp2s0): nameserver '192.168.1.1'
Aug 17 16:29:20 bvvcomp NetworkManager[1112]: <info> [1597670960.1569] dhcp4 (wlp2s0): state changed bound -> bound
Aug 17 16:29:20 bvvcomp dbus[904]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service'
Aug 17 16:29:20 bvvcomp dhclient[1325]: bound to 192.168.1.141 -- renewal in 12352 seconds.
Aug 17 16:29:20 bvvcomp systemd[1]: Starting Network Manager Script Dispatcher Service...
-- Subject: Unit NetworkManager-dispatcher.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit NetworkManager-dispatcher.service has begun starting up.
Aug 17 16:29:20 bvvcomp dbus[904]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Aug 17 16:29:20 bvvcomp systemd[1]: Started Network Manager Script Dispatcher Service.
-- Subject: Unit NetworkManager-dispatcher.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit NetworkManager-dispatcher.service has finished starting up.
--
-- The start-up result is done.
Aug 17 16:29:20 bvvcomp nm-dispatcher[13569]: req:1 'dhcp4-change' [wlp2s0]: new request (4 scripts)
Aug 17 16:29:20 bvvcomp nm-dispatcher[13569]: req:1 'dhcp4-change' [wlp2s0]: start running ordered scripts...
Unfortunately, this advice did not help:
How to move elasticsearch data directory? ;
elasticsearch changing path.logs and/or path.data - fails to start ;
Elasticsearch after change path.data, unable to access 'default.path.data' ;
Is that perhaps a new issue, specific to version 7.x?
Thank you
Update 1 - error log (/var/log/elasticsearch/elasticsearch.log):
[2020-08-18T01:30:00,000][INFO ][o.e.x.m.MlDailyMaintenanceService] [bvvcomp] triggering scheduled [ML] maintenance tasks
[2020-08-18T01:30:00,014][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [bvvcomp] Deleting expired data
[2020-08-18T01:30:00,052][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [bvvcomp] Completed deletion of expired ML data
[2020-08-18T01:30:00,053][INFO ][o.e.x.m.MlDailyMaintenanceService] [bvvcomp] Successfully completed [ML] maintenance tasks
[2020-08-18T04:30:00,017][INFO ][o.e.x.s.SnapshotRetentionTask] [bvvcomp] starting SLM retention snapshot cleanup task
[2020-08-18T04:30:00,025][INFO ][o.e.x.s.SnapshotRetentionTask] [bvvcomp] there are no repositories to fetch, SLM retention snapshot cleanup task complete
[2020-08-18T05:27:08,457][INFO ][o.e.n.Node ] [bvvcomp] stopping ...
[2020-08-18T05:27:08,482][INFO ][o.e.x.w.WatcherService ] [bvvcomp] stopping watch service, reason [shutdown initiated]
[2020-08-18T05:27:08,483][INFO ][o.e.x.w.WatcherLifeCycleService] [bvvcomp] watcher has stopped and shutdown
[2020-08-18T05:27:08,495][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [bvvcomp] [controller/21903] [Main.cc#155] ML controller exiting
[2020-08-18T05:27:08,497][INFO ][o.e.x.m.p.NativeController] [bvvcomp] Native controller process has stopped - no new native processes can be started
[2020-08-18T05:27:08,540][INFO ][o.e.n.Node ] [bvvcomp] stopped
[2020-08-18T05:27:08,541][INFO ][o.e.n.Node ] [bvvcomp] closing ...
[2020-08-18T05:27:08,585][INFO ][o.e.n.Node ] [bvvcomp] closed
[2020-08-18T05:27:19,077][ERROR][o.e.b.Bootstrap ] [bvvcomp] Exception
java.lang.IllegalStateException: Unable to access 'path.data' (/run/media/admin/bvv2/elasticsearch)
at org.elasticsearch.bootstrap.FilePermissionUtils.addDirectoryPath(FilePermissionUtils.java:70) ~[elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:297) ~[elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:252) ~[elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Security.configure(Security.java:121) ~[elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:222) ~[elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:393) [elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) [elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161) [elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127) [elasticsearch-cli-7.8.1.jar:7.8.1]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126) [elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-7.8.1.jar:7.8.1]
Caused by: java.nio.file.AccessDeniedException: /run/media/admin/bvv2
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:90) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]
at sun.nio.fs.UnixFileSystemProvider.checkAccess(UnixFileSystemProvider.java:313) ~[?:?]
at java.nio.file.Files.createDirectories(Files.java:766) ~[?:?]
at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:389) ~[elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.FilePermissionUtils.addDirectoryPath(FilePermissionUtils.java:68) ~[elasticsearch-7.8.1.jar:7.8.1]
... 12 more
Permissions:
ls -l /run/media/admin/bvv2
drwxrwsrwx 3 elasticsearch elasticsearch 4096 Aug 17 17:26 elasticsearch
ls -l /run/media/admin
total 4
drwxr-xr-x 11 admin admin 4096 Aug 17 13:22 bvv2
I encountered a similar error, and it was caused by incorrect parent directory permissions.
One of the parent directories didn't allow other Unix users to access it; more specifically, its permissions were drwx--x---+. Elasticsearch started after changing the permissions to drwx--x--x+ (chmod 711). You can try the same.
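A quick check along those lines, assuming the service runs as the elasticsearch user and using the paths from the question (adjust to your mount):
sudo -u elasticsearch ls /run/media/admin/bvv2/elasticsearch || echo "not traversable"
sudo chmod o+x /run/media/admin
The first command fails if any directory on the way lacks search (execute) permission for the service user; for automounted media the restrictive parent is often /run/media/<user> itself, which the second command opens up.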
Trying to upgrade TFS 2018 to DevOps Server 2019. Stuck at the readiness checks screen. It wants me to remove the elasticsearch service.
The service is stopped.
And the path mentioned in the service properties, specifically the Search folder, does not exist: C:\Program Files\Microsoft Team Foundation Server 2018\Search\ES\elasticsearchv5\bin\elasticsearch-service-x64.exe //RS//elasticsearch-service-x64.
So there is no way to run service -remove from the bin folder.
Relevant log:
[Info #07:49:10.510] +-+-+-+-+-| Running Service Not Installed: Verifying the following Windows service is not installed: elasticsearch-service-x64 |+-+-+-+-+-
[Info #07:49:10.511]
[Info #07:49:10.511] +-+-+-+-+-| Verifying the following Windows service is not installed: elasticsearch-service-x64 |+-+-+-+-+-
[Info #07:49:10.511] Starting Node: VSEARCHSERVICENOTINSTALLED
[Info #07:49:10.511] NodePath : VINPUTS/Conditional/Progress/Conditional/VSEARCHSERVICENOTINSTALLED
[Info #07:49:10.511] Verifying that the following service is NOT installed: elasticsearch-service-x64. Machine: ..
[Info #07:49:10.512] Node returned: Error
[Error #07:49:10.512] The following Windows service is installed on your computer: elasticsearch-service-x64. Remove elasticsearch-service-x64 to continue. Read the troubleshooting guide (https://go.microsoft.com/fwlink/?linkid=828578) for more details.
[Info #07:49:10.512] Completed Service Not Installed: Error
[Info #07:49:10.512] -----------------------------------------------------
[Info #07:49:10.512]
[Info #07:49:10.512] +-+-+-+-+-| Running VerifySearchIndexLocation: Verifying that the search index location path is valid. |+-+-+-+-+-
[Info #07:49:10.512]
[Info #07:49:10.512] +-+-+-+-+-| Verifying that the search index location path is valid. |+-+-+-+-+-
[Info #07:49:10.512] Starting Node: VSEARCHINDEXLOCATIONVERIFIER
[Info #07:49:10.512] NodePath : VINPUTS/Conditional/Progress/Conditional/VSEARCHINDEXLOCATIONVERIFIER
[Info #07:49:10.513] Node returned: Success
[Info #07:49:10.513] Completed VerifySearchIndexLocation: Success
[Info #07:49:10.513] -----------------------------------------------------
[Info #07:49:10.513]
[Info #07:49:10.513] +-+-+-+-+-| Running Verify ElasticSearch port is available: Verifying that a port is available in range 9200-9299 |+-+-+-+-+-
[Info #07:49:10.513]
[Info #07:49:10.513] +-+-+-+-+-| Verifying that a port is available in range 9200-9299 |+-+-+-+-+-
[Info #07:49:10.513] Starting Node: VSEARCHESPORTAVAILABLE
[Info #07:49:10.513] NodePath : VINPUTS/Conditional/Progress/Conditional/VSEARCHESPORTAVAILABLE
[Info #07:49:10.514] Port: 9200 is available for configuring elasticsearch
[Info #07:49:10.514] Node returned: Success
[Info #07:49:10.514] Completed Verify ElasticSearch port is available: Success
[Info #07:49:10.514] -----------------------------------------------------
[Info #07:49:10.514]
[Info #07:49:10.514] +-+-+-+-+-| Running VerifySearchServiceAccount: Verifying that the search service account name and password is valid. |+-+-+-+-+-
[Info #07:49:10.514]
[Info #07:49:10.515] +-+-+-+-+-| Verifying that the search service account name and password is valid. |+-+-+-+-+-
[Info #07:49:10.515] Starting Node: VSEARCHACCOUNTVALID
[Info #07:49:10.515] NodePath : VINPUTS/Conditional/Progress/Conditional/VSEARCHACCOUNTVALID
[Info #07:49:10.515] Node returned: Success
[Info #07:49:10.515] Completed VerifySearchServiceAccount: Success
[Info #07:49:10.515] -----------------------------------------------------
In that case, just run this command in cmd as an administrator:
sc delete elasticsearch-service-x64
According to the Microsoft documentation, the sc delete command removes the service from the registry without needing an executable at the specified path.
After running this command, close all windows referring to the Windows service and request a new check in Azure DevOps Server 2019.
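To confirm the registry entry is gone before re-running the check, you can query from the same elevated prompt:
sc query elasticsearch-service-x64
Once the removal has taken effect, this should report "The specified service does not exist as an installed service."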
I am using a Raspberry Pi. To reduce I/O on my SD card, I symlink all important log files to an external USB-mounted hard drive.
Example:
ln -s /media/usb-device/logs/auth.log /var/log/auth.log
The logging works fine, but fail2ban does not seem to like it. When I enable ssh monitoring in my /etc/fail2ban/jail.local file,
# [sshd]
enabled = true
bantime = 3600
fail2ban crashes when I run systemctl restart fail2ban.service.
I have tried to hardcode the path:
# logpath = %(sshd_log)s
logpath = /media/usb-devive/logs/auth.log
But fail2ban throws the same error:
fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2018-04-28 20:42:33 CEST; 45s ago
Docs: man:fail2ban(1)
Process: 3014 ExecStop=/usr/bin/fail2ban-client stop (code=exited, status=0/SUCCESS)
Process: 3045 ExecStart=/usr/bin/fail2ban-client -x start (code=exited, status=255)
Main PID: 658 (code=killed, signal=TERM)
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Service hold-off time over, scheduling restart.
Apr 28 20:42:33 raspberrypi systemd[1]: Stopped Fail2Ban Service.
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Start request repeated too quickly.
Apr 28 20:42:33 raspberrypi systemd[1]: Failed to start Fail2Ban Service.
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Unit entered failed state.
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Failed with result 'exit-code'.
Any ideas?
"devive" in the logpath is spelt incorrectly