Issue with Let's Encrypt certificates and auto renewal - lets-encrypt

I'm not able to renew my certificates with Let's Encrypt.
To renew them I'm using the command: certbot renew --dry-run
My certbot version: certbot 0.31.0
My Ubuntu version: Ubuntu 18.04.3 LTS
Error message:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates below have not been saved.)
All renewal attempts failed. The following certs could not be renewed:
/etc/letsencrypt/live/**************/fullchain.pem (failure)
/etc/letsencrypt/live/**************/fullchain.pem (failure)
/etc/letsencrypt/live/**************/fullchain.pem (failure)
/etc/letsencrypt/live/**************/fullchain.pem (failure)
/etc/letsencrypt/live/**************/fullchain.pem (failure)
/etc/letsencrypt/live/**************/fullchain.pem (failure)
/etc/letsencrypt/live/**************/fullchain.pem (failure)
/etc/letsencrypt/live/**************/fullchain.pem (failure)

Try running it with root privileges instead:
sudo certbot renew
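If renewal still fails with root, the underlying reason should be in certbot's logs; a few standard certbot commands worth running (this is a sketch assuming a default install, with logs under /var/log/letsencrypt):

```shell
# List the certificates certbot manages, with domains and expiry dates
sudo certbot certificates

# Re-run the simulated renewal with root privileges and verbose output
sudo certbot renew --dry-run -v

# Inspect the tail of the most recent log for the real error
sudo tail -n 50 /var/log/letsencrypt/letsencrypt.log
```

The verbose dry run usually names the failing authenticator (e.g. a webroot path that no longer exists or a port 80 conflict), which the summary output hides.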

Related

How to debug GitHub Action failure

Just yesterday a stable GitHub Action (CI) started failing rather cryptically, and I've run out of tools to debug it.
All I can think of is that our BUNDLE_ACCESS_TOKEN went bad somehow, but I didn't set that up. It's an Actions secret under Repository Secrets, whose value isn't visible in the GitHub UI. How can I test whether it's valid?
Or maybe it's something else?!? "Bad credentials" is vague...
Here's the meat of the action we're trying to run:
# my_tests.yml
jobs:
  my-test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:13.4
        env:
          POSTGRES_USERNAME: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: myapp_test
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    env:
      RAILS_ENV: test
      POSTGRES_HOST: localhost
      POSTGRES_USERNAME: pg
      POSTGRES_PASSWORD: pg
      GITHUB_TOKEN: ${{ secrets.BUNDLE_ACCESS_TOKEN }}
      BUNDLE_GITHUB__COM: x-access-token:${{ secrets.BUNDLE_ACCESS_TOKEN }}
      CUCUMBER_FORMAT: progress
    steps:
      - uses: actions/checkout@v2
      - uses: FedericoCarboni/setup-ffmpeg@v1
      ...
And with debug turned on, here's the failure (line 20) from GitHub Actions:
Run FedericoCarboni/setup-ffmpeg@v1
1 ------- ##[debug]Evaluating condition for step: 'Run FedericoCarboni/setup-ffmpeg@v1'
2 ##[debug]Evaluating: success()
3 ##[debug]Evaluating success:
4 ##[debug]=> true
5 ##[debug]Result: true
6 ##[debug]Starting: Run FedericoCarboni/setup-ffmpeg@v1
7 ##[debug]Loading inputs
8 ##[debug]Loading env
9 Run FedericoCarboni/setup-ffmpeg@v1
10 with:
11 env:
12 RAILS_ENV: test
13 POSTGRES_HOST: localhost
14 POSTGRES_USERNAME: pg
15 POSTGRES_PASSWORD: pg
16 GITHUB_TOKEN: ***
17 BUNDLE_GITHUB__COM: x-access-token:***
19 CUCUMBER_FORMAT: progress
20 Error: Bad credentials
21 ##[debug]Node Action run completed with exit code 1
22 ##[debug]Finishing: Run FedericoCarboni/setup-ffmpeg@v1
Thanks for any help.
For your particular case, try scoping GITHUB_TOKEN and BUNDLE_GITHUB__COM to only the steps that actually use them instead of the whole job.
Also consider switching to FedericoCarboni/setup-ffmpeg@v2; it has built-in support for github.token.
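A sketch of that scoping; the step name and the v2 `github-token` input are assumptions on my part, so check them against the action's README:

```yaml
steps:
  - uses: actions/checkout@v2
  # Expose the PAT only to the step that actually installs private gems
  - name: Install gems
    run: bundle install
    env:
      BUNDLE_GITHUB__COM: x-access-token:${{ secrets.BUNDLE_ACCESS_TOKEN }}
  # v2 can authenticate with the workflow's built-in token
  - uses: FedericoCarboni/setup-ffmpeg@v2
    with:
      github-token: ${{ secrets.GITHUB_TOKEN }}
```

With the token out of the job-level env, a bad PAT can only fail the bundle step, which makes the cryptic "Bad credentials" much easier to localize.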
Generic GitHub Actions debugging:
https://github.com/nektos/act
Runs actions locally. Mostly gives you faster feedback for experiments.
https://github.com/mxschmitt/action-tmate
Lets you create an interactive remote session where you can poke around.

Travis doesn't deploy on Heroku. Invalid credentials

I'm trying to set up automatic deploys to Heroku with Travis. I get this error when Travis tries to deploy:
API request failed.
Message: Invalid credentials provided.
Reference:
failed to deploy
This is my travis file:
jobs:
  include:
    - language: python
      python:
        - "3.6"
      install:
        - pip install -r Deployment/requirements.txt
      script:
        - python -c "print ('Testing some script')"
      branches:
        only:
          - master
          - develop
      before_deploy:
        - cd Deployment
      deploy:
        - provider: heroku
          skip_cleanup: true
          api_key:
            secure: b3AVdCtJ2e/+Gu1...
          app:
            master: motorent-deploy
            develop: motorent-apitest
    - language: android
      dist: trusty
      env:
        global:
          - ANDROID_API_LEVEL=29
          - ANDROID_BUILD_TOOLS_VERSION=29.0.3
          - extra-google-google_play_services
          - extra-google-m2repository
          - extra-android-m2repository
          - addon-google_apis-google-$ANDROID_API_LEVEL
      android:
        licenses:
          - 'android-sdk-preview-license-.+'
          - 'android-sdk-license-.+'
          - 'google-gdk-license-.+'
        components:
          - tools
          - platform-tools
          - android-$ANDROID_API_LEVEL
          - build-tools-$ANDROID_BUILD_TOOLS_VERSION
          - extra-google-google_play_services
          - extra-google-m2repository
          - extra-android-m2repository
          - addon-google_apis-google-$ANDROID_API_LEVEL
      addons:
        apt:
          packages:
            - ant
      before_install:
        - touch $HOME/.android/repositories.cfg
        - yes | sdkmanager "platforms;android-29"
        - yes | sdkmanager "build-tools;29.0.3"
      before_script:
        - cd AndroidApp
        - chmod +x gradlew
      script:
        - ./gradlew build check
As you can see, I have two different projects in the same repository, but that's not important, because the Android tests work well. What doesn't work is the deploy of the Flask project. The solutions I have found talk about the need to encrypt the api_key. I have tested that with travis encrypt $(heroku auth:token) but it doesn't work either.
I've been trying to find the cause for a long time, but I don't know what it could be.
I had the same error.
Here are the steps I performed to fix it.
First I tried the command: heroku auth:token
but the output was:
› Warning: token will expire 06/06/2021
› Use heroku authorizations:create to generate a long-term token
Then I tried the command: heroku authorizations:create
One line of the output contained Token: <created_heroku_auth_token>
I took that value (<created_heroku_auth_token>)
and went to
https://travis-ci.org/github/<my_github_user>/<my_repo>/settings
where I created a new environment variable:
HEROKU_AUTH_TOKEN with the value of my <created_heroku_auth_token>
Then in my .travis.yml I changed the value of api_key to:
api_key: $HEROKU_AUTH_TOKEN
After pushing this change, the deployment to Heroku went fine.
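Condensed, the whole flow above looks like this (the token value is the one printed by the authorizations:create output):

```shell
# Generate a long-lived token; `heroku auth:token` only prints a short-lived one
heroku authorizations:create

# Copy the printed "Token:" value into a HEROKU_AUTH_TOKEN environment variable
# in the Travis repository settings, then reference it from .travis.yml:
#   deploy:
#     - provider: heroku
#       api_key: $HEROKU_AUTH_TOKEN
```

Keeping the token in a Travis environment variable instead of an encrypted `secure:` blob also means it can be rotated without touching the repo.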

Can't start Auditbeat

Hi, I am using the ELK stack version 7.1.1 with X-Pack installed, and I'm trying to configure and set up Auditbeat, but it shows the following error on startup:
ERROR instance/beat.go:916 Exiting: 2 errors: 1 error: failed to create audit client: failed to get audit status: operation not permitted; 1 error: unable to create DNS sniffer: failed creating af_packet sniffer: operation not permitted
Exiting: 2 errors: 1 error: failed to create audit client: failed to get audit status: operation not permitted; 1 error: unable to create DNS sniffer: failed creating af_packet sniffer: operation not permitted
My auditbeat config:
auditbeat.modules:
- module: auditd
  audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
  audit_rules: |
- module: file_integrity
  paths:
  - /bin
  - /usr/bin
  - /sbin
  - /usr/sbin
  - /etc
- module: system
  datasets:
  - host
  - login
  - package
  - process
  - socket
  - user
  state.period: 12h
  user.detect_password_changes: true
  login.wtmp_file_pattern: /var/log/wtmp*
  login.btmp_file_pattern: /var/log/btmp*

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression

setup.kibana:
  host: "localhost:5601"

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "mypassword"
Please help me solve it.
I would assume you have launched Auditbeat as an unprivileged user.
Because Auditbeat has to interact with auditd, most of its activities must be performed by root (at least root rights solved the same issue in my case).
PS: if you can't switch to root, try this:
link
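To confirm the privileges diagnosis, it may help to run the binary directly as root with logging to stderr (the paths here assume a standard package install):

```shell
# Run in the foreground as root, logging to stderr (-e) with the given config
sudo auditbeat -e -c /etc/auditbeat/auditbeat.yml

# Or via systemd, whose bundled unit already runs as root
sudo systemctl start auditbeat
sudo journalctl -u auditbeat -n 50
```

If the foreground run as root starts cleanly, the "operation not permitted" errors were indeed a privilege problem rather than a config one.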

FileSystemException: elasticsearch keystore device or resource busy

I want to build Elasticsearch (7.3.0) and run it in Kubernetes, but I get an error.
Error:
Exception in thread "main" java.nio.file.FileSystemException:
/usr/share/elasticsearch/config/elasticsearch.keystore.tmp
/usr/share/elasticsearch/config/elasticsearch.keystore: Device or
resource busy
My steps:
Create the secret:
kubectl create secret generic elasticsearch-keystore --from-file=./elasticsearch.keystore
Set the secret mount:
secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/elasticsearch/config/certs
  - secretName: elasticsearch-keystore
    path: /usr/share/elasticsearch/config/elasticsearch.keystore
    subPath: elasticsearch.keystore
I tried changing the elasticsearch.keystore mode to g+s, but that doesn't work.
Is there something I am missing? Thanks.

Remote postgres connection on circleci build to run laravel phpunit tests

We are using Laravel 5.6, PostgreSQL, and CircleCI in our API production environment, and we're still trying to implement some key unit tests to run before a commit is merged to master.
When trying to configure remote PostgreSQL database access on CircleCI, we ran into the following problem.
Our .circleci/config.yml is supposed to pull a custom-built image (edunicastro/docker:latest) and run the PHPUnit tests in the "build" step,
but we are getting the following error message:
PDOException: SQLSTATE[08006] [7] could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
The problem is this was supposed to connect to our remote database, but in our production environment the connection is set up using .env and Laravel.
I have tried copying the "DB_PGSQL_HOST" key into our config.yml, but nothing changed; it kept trying to connect to 127.0.0.1.
Using the key "PGHOST" instead also had no effect.
This is the relevant "build" part of our config.yml:
version: 2
jobs:
  build:
    docker:
      - image: edunicastro/docker:latest
        environment:
          DB_PGSQL_CONNECTION: <prod_laravel_connection_name>
          DB_PGSQL_HOST: <prod_db_host>
          DB_PGSQL_PORT: 5432
          DB_PGSQL_DATABASE: <prod_db_name>
    working_directory: ~/repo
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "composer.json" }}
            - v1-dependencies-
      - run: composer install -n --prefer-dist
      - run: ./vendor/bin/phpunit
      - save_cache:
          paths:
            - ./vendor
          key: v1-dependencies-{{ checksum "composer.json" }}
Okay, I was missing the command to copy the .env over, right under - checkout:
- checkout
- run: cp .env.test .env
Laravel was already configured to use it, so I didn't need to change anything else.
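In context, the fix is just one extra run step near the top of .circleci/config.yml (assuming .env.test is committed to the repository):

```yaml
steps:
  - checkout
  # Copy the committed test environment file so Laravel reads the remote DB settings
  - run: cp .env.test .env
  - restore_cache:
      keys:
        - v1-dependencies-{{ checksum "composer.json" }}
```

Without a .env file, Laravel falls back to its defaults, which is why the connection kept going to 127.0.0.1 regardless of the environment keys set in the Docker image.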
