Problems with renewing certificates after ACME API upgrade to V2 - lets-encrypt

We are using acme.sh to renew our Let's Encrypt certificates and ran into problems today.
First we got some errors and repeatedly hit the rate limit for invalid requests, so we decided to upgrade to V2, as that was recommended anyway.
We upgraded by running acme.sh --upgrade and updated all the URLs in our domain configs to use the new v2 endpoints.
Now acme.sh --renew -d my.domain.at --ecc gets further than before (we had some trouble where we couldn't get a nonce because we were missing the /directory suffix in the Le_API variable).
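For reference, after the upgrade the endpoint line in the per-domain config (for us something like ~/.acme.sh/my.domain.at_ecc/my.domain.at.conf; the exact path is an assumption) has to contain the full v2 directory URL:
Le_API='https://acme-v02.api.letsencrypt.org/directory'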
Now we have the problem that we receive an unauthorized error during verification:
{
  "type": "http-01",
  "status": "invalid",
  "error": {
    "type": "urn:ietf:params:acme:error:unauthorized",
    "detail": "Invalid response from https://my.domain.at/login [...]: \"\u003c!DOCTYPE html\u003e\\n\u003chtml\u003e\\n \u003chead\u003e\\n \u003cmeta http-equiv=\\\"X-UA-Compatible\\\" content=\\\"IE=edge\\\"/\u003e\\n \u003cmeta charset=\\\"utf-8\\\"/\u003e\"",
    "status": 403
  },
  "url": "https://acme-v02.api.letsencrypt.org/acme/chall-v3/11110584350/BRAIDw",
  "token": "my-hash",
  "validationRecord": [
    {
      "url": "http://my.domain.at/.well-known/acme-challenge/my-hash",
      "hostname": "my.domain.at",
      "port": "80",
      "addressesResolved": [
        "X.X.X.X",
        "..."
      ],
      "addressUsed": "..."
    },
    {
      "url": "https://my.domain.at/.well-known/acme-challenge/my-hash",
      "hostname": "my.domain.at",
      "port": "443",
      "addressesResolved": [
        "X.X.X.X",
        "..."
      ],
      "addressUsed": "..."
    },
    {
      "url": "https://my.domain.at/login",
      "hostname": "my.domain.at",
      "port": "443",
      "addressesResolved": [
        "X.X.X.X",
        "..."
      ],
      "addressUsed": "..."
    }
  ]
}
We have NGINX running and we are not sure what's happening here. As far as we understood, the challenge request shouldn't be redirected to the /login page.
Are we missing anything? Certificate renewal always worked flawlessly until we ran into problems today and tried to upgrade.

It turned out our NGINX configuration was wrong: we had configured the required route .well-known/acme-challenge to point to an empty folder, and we are now asking ourselves how that could ever have worked. ¯\_(ツ)_/¯
After fixing the config in NGINX (sketched below), everything worked as expected.
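For anyone hitting the same thing, this is roughly the shape of the corrected block; the webroot path is a placeholder, not our real one:
location /.well-known/acme-challenge/ {
    # Serve the challenge files acme.sh writes, before any redirect-to-login rules apply
    root /var/www/letsencrypt;
    default_type "text/plain";
    try_files $uri =404;
}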

Related

apache/apisix working mocking plugin example

I've installed apisix and apisix-dashboard with Helm on my k8s cluster.
I used all defaults except the API keys for the admin and viewer accounts, and a custom username/password for the dashboard. So I'm currently running version 2.15.
My installation steps:
helm repo add apisix https://charts.apiseven.com
helm repo update
# installing apisix/apisix
helm install --set-string admin.credentials.admin="new_api_key" \
  --set-string admin.credentials.viewer="new_api_key" \
  apisix apisix/apisix --create-namespace --namespace my-apisix
# installing apisix/apisix-dashboard, where values.yaml contains the username/password
helm install -f values.yaml apisix-dashboard apisix/apisix-dashboard --create-namespace --namespace my-apisix
I'm unable to configure the mocking plugin; I've been following the docs.
With the provided example I'm unable to call the API on the route with ID 1, so I've created a custom route and then used the VIEW JSON, changing the configuration according to the provided sample.
All calls to this route return 502 errors, and in the logs I can see the route is sending traffic to a non-existent server. All of that leads me to believe that the mocking plugin is disabled.
Example of my route:
{
  "uri": "/mock-test.html",
  "name": "mock-sample-read",
  "methods": [
    "GET"
  ],
  "plugins": {
    "mocking": {
      "content_type": "application/json",
      "delay": 1,
      "disable": false,
      "response_schema": {
        "$schema": "http://json-schema.org/draft-04/schema#",
        "properties": {
          "a": {
            "type": "integer"
          },
          "b": {
            "type": "integer"
          }
        },
        "required": [
          "a",
          "b"
        ],
        "type": "object"
      },
      "response_status": 200,
      "with_mock_header": true
    }
  },
  "upstream": {
    "nodes": [
      {
        "host": "127.0.0.1",
        "port": 1980,
        "weight": 1
      }
    ],
    "timeout": {
      "connect": 6,
      "send": 6,
      "read": 6
    },
    "type": "roundrobin",
    "scheme": "https",
    "pass_host": "node",
    "keepalive_pool": {
      "idle_timeout": 60,
      "requests": 1000,
      "size": 320
    }
  },
  "status": 1
}
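For reference, the same route can also be applied via the Admin API, roughly like this (host/port and key are placeholders, the admin listen address depends on the deployment; route.json holds the JSON above):
curl http://127.0.0.1:9080/apisix/admin/routes/1 \
  -H 'X-API-KEY: new_api_key' \
  -X PUT -d @route.json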
Can anyone provide me with an actual working example or point out what I'm missing? Any suggestions are welcome.
EDIT:
Looking at the logs of apache/apisix:2.15.0-alpine, it looks like the mocking plugin is disabled. The docs say: "The mocking Plugin is used for mocking an API. When executed, it returns random mock data in the format specified and the request is not forwarded to the Upstream."
The error logs (domain and IP addresses changed) suggest that the traffic is being forwarded to the upstream:
10.10.10.24 - - [23/Sep/2022:11:33:16 +0000] my.domain.com "GET /mock-test.html HTTP/1.1" 502 154 0.001 "-" "PostmanRuntime/7.29.2" 127.0.0.1:1980 502 0.001 "http://my.domain.com"
Plugins are enabled globally; I've verified that using the Keycloak plugin.
EDIT 2: Could this be a bug in version 2.15 of apisix? There is currently no open issue in the GitHub repo.
Yes, the mocking plugin is not enabled.
You can just add it to the enabled plugin list here:
https://github.com/apache/apisix-helm-chart/blob/7ddeca5395a2de96acd06bada30f3ab3580a6252/charts/apisix/values.yaml#L219-L269
You can also submit a PR directly to fix it.
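Roughly, the change looks like this (the neighbouring entry shown is illustrative; keep the chart's actual defaults and just append mocking):
plugins:
  - authz-keycloak
  # ... the chart's other default entries ...
  - mocking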

Google home actions.fulfillment.devices not getting enabled

I am using Google smart home actions for IoT. I updated my action URL and account linking details. When I try to enable the Test in the simulator to deploy my TestAPP to the cloud, it fails with the error "GoogleFulfillment 'actions.fulfillment.devices' is not supported", and the linked app does not update the old URL. This worked a few days ago. Any changes on Google's side, or does anybody have a clue?
There is a manual workaround, thanks to the Google Assistant forum:
Steps:
1 - Download the gactions CLI at https://developers.google.com/actions/tools/gactions-cli
2 - Authenticate with any command:
./gactions list --project [YOUR_PROJECT_ID]
3 - Download the JSON representation of your action:
./gactions get --project [YOUR_PROJECT_ID] --version draft > action.json
4 - Edit the JSON. Extract the only object from its array and remove the nested "googleFulfillments" object:
"googleFulfillments": [
{
"endpoint": {
"baseUrl": "[URL]"
},
"name": "actions.fulfillment.devices"
}
],
5 - Delete the brackets "[ ]" at the top and end of the file. Only one language can be activated at a time, so delete any unnecessary data from the action.json file. The file then looks like this, with its parameters:
{
  "accountLinking": {
    "accessTokenUrl": "xxxx",
    "assertionTypes": [
      "ID_TOKEN"
    ],
    "authenticationUrl": "xxx",
    "clientId": "xxx",
    "clientSecret": "xxxx",
    "grantType": "AUTH_CODE"
  },
  "actions": [
    {
      "description": "Smart home action for project xxxxxxx",
      "fulfillment": {
        "conversationName": "AoGSmartHomeConversation_xxxxxx"
      },
      "name": "actions.devices"
    }
  ],
  "conversations": {
    "AoGSmartHomeConversation_xxxxxxxx": {
      "name": "",
      "url": "xxxxxxx"
    }
  },
  "locale": "en",
  "manifest": {
    "category": "xxx",
    "companyName": "xxx",
    "contactEmail": "xxx",
    "displayName": "xxx",
    "largeLandscapeLogoUrl": "xxxxxx",
    "longDescription": "xxxx",
    "privacyUrl": "xxx",
    "shortDescription": "xxxx",
    "smallSquareLogoUrl": "xxxx",
    "termsOfServiceUrl": "xxxxx",
    "testingInstructions": "xxxxx"
  }
}
6 - If you have updated the fulfillment, authentication or token URL, go to the Google Actions Console and update its entry there;
7 - Push your fixed action into test:
./gactions test --project [YOUR_PROJECT_ID] --action_package ./action.json
This replaces the "Click Simulator under TEST" step in the Google Assistant manual setup. It worked for me!
More help here: https://community.home-assistant.io/t/google-assistant-trouble-shooting/99223/142

Chat bot Hands-Off published on Azure - gives connection error in emulator: Cannot post activity. Unauthorized

I have been trying to host a bot, which works locally, on Azure.
Connecting to the hosted bot from the local emulator gives a connection error (emulator: Cannot post activity. Unauthorized).
My .bot file:
{
  "name": "production",
  "description": "",
  "services": [
    {
      "type": "endpoint",
      "appId": "********************",
      "appPassword": "*************",
      "endpoint": "intermediatorbotsample2019.azurewebsites.net/api/messages",
      "name": "AzureAccountLive",
      "id": "178"
    }
  ],
  "padlock": "",
  "version": "2.0",
  "path": "D:\\Architecture\\IntermediatorBot\\production.bot",
  "overrides": null
}
I took a look at the .bot file in your comment. The problem is that you have "name": "AzureAccountLive" in your services section. This name MUST be "production". The outer-level "name" has to match the name of the bot (in this case, it's probably intermediatorbotsample2019). It's the name "production" with type "endpoint" combination that the Azure Bot Service looks for. If you update your .bot file to match what I have below, your bot should work as expected.
{
  "name": "YOURBOTNAMEHERE",
  "description": "",
  "services": [
    {
      "type": "endpoint",
      "appId": "********************",
      "appPassword": "*************",
      "endpoint": "http://intermediatorbotsample2019.azurewebsites.net/api/messages",
      "name": "production",
      "id": "178"
    }
  ],
  "padlock": "",
  "version": "2.0",
  "path": "D:\\Architecture\\IntermediatorBot\\production.bot",
  "overrides": null
}
Regenerated the AppId and secret from https://dev.botframework.com/.
I was previously using AzureBotProject; replaced it with AzureBotChannelProject instead.

Composer: The checksum verification of the file failed

I am hosting my own Satis repository via GitLab Pages.
For some reason, however, it keeps erroring with "The checksum verification of the file failed" when Composer tries to install one of the repositories.
It has been happening on and off for a few weeks now, and I can't work out why it's happening or how to debug it.
Here is my satis.json:
{
  "name": "Composer Repository",
  "homepage": "<snip>",
  "repositories": [
    {"type": "vcs", "url": "<snip>"},
    {"type": "vcs", "url": "<snip>"}
  ],
  "require-all": true,
  "require-dependencies": true,
  "archive": {
    "directory": "dist",
    "format": "zip",
    "skip-dev": true,
    "checksum": false
  }
}
I added "checksum": false a week ago, which seemed to fix the issue, but now it has just come back.
I've tried clearing caches and nothing is working. Any ideas?

Failure to connect to proxy "Certificate signed by unknown authority"

I'm attempting to connect to a CloudSQL instance via a cloudsql-proxy container on my Kubernetes deployment. I have the cloudsql credentials mounted and the value of GOOGLE_APPLICATION_CREDENTIALS set.
However, I'm still receiving the following error in my logs:
2018/10/08 20:07:28 Failed to connect to database: Post https://www.googleapis.com/sql/v1beta4/projects/[projectID]/instances/[appName]/createEphemeral?alt=json&prettyPrint=false: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: x509: certificate signed by unknown authority
My connection string looks like this:
[dbUser]:[dbPassword]@cloudsql([instanceName])/[dbName]?charset=utf8&parseTime=True&loc=Local
And the proxy dialer is blank-imported (for its side effects) as:
_ github.com/GoogleCloudPlatform/cloudsql-proxy/proxy/dialers/mysql
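Put together, the wiring looks roughly like this (a sketch with placeholder credentials, following the dialer package's documented blank-import usage):

package main

import (
	"database/sql"
	"log"

	// Blank import for side effects: registers the "cloudsql" network
	// with the go-sql-driver/mysql driver it pulls in.
	_ "github.com/GoogleCloudPlatform/cloudsql-proxy/proxy/dialers/mysql"
)

func main() {
	// Placeholder DSN matching the shape above.
	db, err := sql.Open("mysql",
		"dbUser:dbPassword@cloudsql(projectID:region:instanceName)/dbName?charset=utf8&parseTime=True&loc=Local")
	if err != nil {
		log.Fatal(err)
	}
	// Ping forces an actual dial, which is where auth/TLS errors surface.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}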
Anyone have an idea what might be missing?
EDIT:
Deployment Spec looks something like this (JSON formatted):
{
  "replicas": 1,
  "selector": {
    ...
  },
  "template": {
    ...
    "spec": {
      "containers": [
        {
          "image": "[app-docker-imager]",
          "name": "...",
          "env": [
            ...
            {
              "name": "MYSQL_PASSWORD",
              ...
            },
            {
              "name": "MYSQL_USER",
              ...
            },
            {
              "name": "GOOGLE_APPLICATION_CREDENTIALS",
              "value": "..."
            }
          ],
          "ports": [
            {
              "containerPort": 8080,
              "protocol": "TCP"
            }
          ],
          "volumeMounts": [
            {
              "mountPath": "/secrets/cloudsql",
              "name": "[secrets-mount-name]",
              "readOnly": true
            }
          ]
        },
        {
          "command": [
            "/cloud_sql_proxy",
            "-instances=...",
            "-credential_file=..."
          ],
          "image": "gcr.io/cloudsql-docker/gce-proxy:1.11",
          "name": "...",
          "ports": [
            {
              "containerPort": 3306,
              "protocol": "TCP"
            }
          ],
          "volumeMounts": [
            {
              "mountPath": "/secrets/cloudsql",
              "name": "[secrets-mount-name]",
              "readOnly": true
            }
          ]
        }
      ],
      "volumes": [
        {
          "name": "[secrets-mount-name]",
          "secret": {
            "defaultMode": 420,
            "secretName": "[secrets-mount-name]"
          }
        }
      ]
    }
  }
}
The error message indicates that your client is not able to trust the certificate of https://www.googleapis.com. There are two possible causes for this:
Your client does not know which root certificates to trust. The official cloudsql-proxy Docker image includes root certificates, so if you are using that image, this is not your problem. If you are not using that image, you should use it (or at least install CA certificates in your image).
Your outbound traffic is being intercepted by a proxy server that is using a different, untrusted certificate. This might be malicious (in which case you need to investigate who is intercepting your traffic). More benignly, you might be in an organization that uses an outbound proxy to inspect traffic according to policy. If this is the case, you should build a new Docker image that includes the CA certificate used by your organization's outbound proxy; a sketch follows.
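For that second case, a minimal sketch of such an image (the base image and file names are assumptions; substitute your organization's actual CA file and proxy binary):

FROM debian:stable-slim
# Install the standard CA bundle
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates && rm -rf /var/lib/apt/lists/*
# corp-proxy-ca.crt is a placeholder for your organization's proxy CA certificate
COPY corp-proxy-ca.crt /usr/local/share/ca-certificates/corp-proxy-ca.crt
RUN update-ca-certificates
# Add the proxy binary (source path is an assumption)
COPY cloud_sql_proxy /cloud_sql_proxy
ENTRYPOINT ["/cloud_sql_proxy"]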
