apache/apisix working mocking plugin example

I've installed apisix and apisix-dashboard with Helm on my k8s cluster.
I used all the defaults except the API keys for the admin and viewer accounts, and a custom username/password for the dashboard. So I'm currently running version 2.15.
My installation steps
helm repo add apisix https://charts.apiseven.com
helm repo update
# installing apisix/apisix
helm install --set-string admin.credentials.admin="new_api_key" \
  --set-string admin.credentials.viewer="new_api_key" \
  apisix apisix/apisix --create-namespace --namespace my-apisix
# installing apisix/apisix-dashboard, where values.yaml contains username/password
helm install -f values.yaml apisix-dashboard apisix/apisix-dashboard --create-namespace --namespace my-apisix
I'm unable to configure the mocking plugin, even though I've been following the docs.
In the provided example I'm unable to call the API on the route with ID 1, so I've created a custom route and then used the VIEW JSON, where I changed the configuration according to the provided sample.
All calls on this route return 502 errors, and in the logs I can see that the route is sending traffic to a non-existent server. All of that leads me to believe that the mocking plugin is disabled.
Example of my route:
{
  "uri": "/mock-test.html",
  "name": "mock-sample-read",
  "methods": [
    "GET"
  ],
  "plugins": {
    "mocking": {
      "content_type": "application/json",
      "delay": 1,
      "disable": false,
      "response_schema": {
        "$schema": "http://json-schema.org/draft-04/schema#",
        "properties": {
          "a": {
            "type": "integer"
          },
          "b": {
            "type": "integer"
          }
        },
        "required": [
          "a",
          "b"
        ],
        "type": "object"
      },
      "response_status": 200,
      "with_mock_header": true
    }
  },
  "upstream": {
    "nodes": [
      {
        "host": "127.0.0.1",
        "port": 1980,
        "weight": 1
      }
    ],
    "timeout": {
      "connect": 6,
      "send": 6,
      "read": 6
    },
    "type": "roundrobin",
    "scheme": "https",
    "pass_host": "node",
    "keepalive_pool": {
      "idle_timeout": 60,
      "requests": 1000,
      "size": 320
    }
  },
  "status": 1
}
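For reference, the same route JSON can also be applied and exercised directly against the Admin API (a sketch with placeholder values: the API key is the one set at install time, and APISIX 2.x serves the Admin API on port 9080 by default):

# apply the route (route ID 1 is arbitrary; route.json contains the JSON above)
curl -i "http://127.0.0.1:9080/apisix/admin/routes/1" \
  -H "X-API-KEY: new_api_key" \
  -X PUT -d @route.json

# call the route; with mocking active this should return the mocked body
# instead of a 502 from the dummy upstream
curl -i "http://127.0.0.1:9080/mock-test.html"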
Can anyone provide me with an actual working example or point out what I'm missing? Any suggestions are welcome.
EDIT:
Looking at the logs of the apache/apisix:2.15.0-alpine container, it looks like the mocking plugin is disabled. The docs say: "The mocking Plugin is used for mocking an API. When executed, it returns random mock data in the format specified and the request is not forwarded to the Upstream."
Error logs (where I've changed the domain and IP address) suggest that the traffic is being forwarded to the upstream:
10.10.10.24 - - [23/Sep/2022:11:33:16 +0000] my.domain.com "GET /mock-test.html HTTP/1.1" 502 154 0.001 "-" "PostmanRuntime/7.29.2" 127.0.0.1:1980 502 0.001 "http://my.domain.com"
Plugins are enabled globally; I've verified that using the Keycloak plugin.
EDIT 2: Could this be a bug in version 2.15 of APISIX? There is currently no open issue about it on the GitHub repo.

Yes, the mocking plugin is not enabled by default.
You can just add it here:
https://github.com/apache/apisix-helm-chart/blob/7ddeca5395a2de96acd06bada30f3ab3580a6252/charts/apisix/values.yaml#L219-L269
You can also submit a PR directly to fix it.
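For example, a minimal sketch of that change, assuming the chart's top-level plugins list (the one at the linked lines) is what gets rendered into APISIX's enabled-plugins config:

# values.yaml for the apisix chart
plugins:
  # ... keep all of the default plugins from the linked values.yaml ...
  - mocking   # append this entry to enable the mocking plugin

Then roll it out and verify (release and namespace as in the question; run the curl from somewhere that can reach the Admin API, which listens on port 9080 by default in 2.x):

helm upgrade apisix apisix/apisix -f values.yaml --namespace my-apisix
curl "http://127.0.0.1:9080/apisix/admin/plugins/list" -H "X-API-KEY: new_api_key"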

Related

web app works locally and on app engine, but not on cloud run

So I've run into this issue with a web app I've made:
- it gets a file path as input
- if the file exists in a bucket, it uses a Python client API to create a Compute Engine instance
- it passes the file path to the instance in the startup script
When I ran it locally, I created a Python virtual environment and then ran the app. When I submit the input in the web browser, the virtual machine is created by the API call. I assumed it used my personal account, so I switched to the service account on the command line with 'gcloud config set account', and it ran fine once more.
When I simply go to the source code directory and deploy it as is, the application can create the virtual machine instances as well.
When I use Google Cloud Build and deploy to Cloud Run, it doesn't create the VM instance.
The web app itself is not throwing any errors, but when I check Compute Engine's logs, there is an error:
{
  "protoPayload": {
    "#type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "status": {
      "code": 3,
      "message": "INVALID_PARAMETER"
    },
    "authenticationInfo": {
      "principalEmail": "####"
    },
    "requestMetadata": {
      "callerIp": "#####",
      "callerSuppliedUserAgent": "(gzip),gzip(gfe)"
    },
    "serviceName": "compute.googleapis.com",
    "methodName": "v1.compute.instances.insert",
    "resourceName": "projects/someproject/zones/somezone/instances/nameofinstance",
    "request": {
      "#type": "type.googleapis.com/compute.instances.insert"
    }
  },
  "insertId": "######",
  "resource": {
    "type": "gce_instance",
    "labels": {
      "instance_id": "#####",
      "project_id": "someproject",
      "zone": "somezone"
    }
  },
  "timestamp": "2021-06-16T12:18:21.253551Z",
  "severity": "ERROR",
  "logName": "projects/someproject/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "operation-#####",
    "producer": "compute.googleapis.com",
    "last": true
  },
  "receiveTimestamp": "2021-06-16T12:18:21.253551Z"
}
In theory, it is the exact same code that worked from my laptop and on App Engine. I'm baffled why it only does this on Cloud Run.
App Engine's default service account was stripped of all its roles and given a custom role tailored to the web app's function.
The Cloud Run service is using a different service account, but it was given that exact same custom role.
Here is the method I use to call the API.
import googleapiclient.discovery  # google-api-python-client
from datetime import date

def create_instance(path):
    compute = googleapiclient.discovery.build('compute', 'v1')
    vmname = "piinnuclei" + date.today().strftime("%Y%m%d%H%M%S")
    startup_script = "#! /bin/bash\napt update\npip3 install pg8000\nexport BUCKET_PATH=my-bucket/{}\ngsutil -m cp -r gs://$BUCKET_PATH /home/connor\ncd /home/connor\n./cloud_sql_proxy -dir=cloudsql -instances=sql-connection-name=unix:sql-connection-name &\npython3 run_analysis_upload.py\nexport ZONE=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')\nexport NAME=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/name -H 'Metadata-Flavor: Google')\ngcloud --quiet compute instances delete $NAME --zone=$ZONE".format(path)
    config = {
        "kind": "compute#instance",
        "name": vmname,
        "zone": "projects/my-project/zones/northamerica-northeast1-a",
        "machineType": "projects/my-project/zones/northamerica-northeast1-a/machineTypes/e2-standard-4",
        "displayDevice": {
            "enableDisplay": False
        },
        "metadata": {
            "kind": "compute#metadata",
            "items": [
                {
                    "key": "startup-script",
                    "value": startup_script
                }
            ]
        },
        "tags": {
            "items": []
        },
        "disks": [
            {
                "kind": "compute#attachedDisk",
                "type": "PERSISTENT",
                "boot": True,
                "mode": "READ_WRITE",
                "autoDelete": True,
                "deviceName": vmname,
                "initializeParams": {
                    "sourceImage": "projects/my-project/global/images/my-image",
                    "diskType": "projects/my-project/zones/northamerica-northeast1-a/diskTypes/pd-balanced",
                    "diskSizeGb": "100"
                },
                "diskEncryptionKey": {}
            }
        ],
        "canIpForward": False,
        "networkInterfaces": [
            {
                "kind": "compute#networkInterface",
                "subnetwork": "projects/my-project/regions/northamerica-northeast1/subnetworks/default",
                "accessConfigs": [
                    {
                        "kind": "compute#accessConfig",
                        "name": "External NAT",
                        "type": "ONE_TO_ONE_NAT",
                        "networkTier": "PREMIUM"
                    }
                ],
                "aliasIpRanges": []
            }
        ],
        "description": "",
        "labels": {},
        "scheduling": {
            "preemptible": False,
            "onHostMaintenance": "MIGRATE",
            "automaticRestart": True,
            "nodeAffinities": []
        },
        "deletionProtection": False,
        "reservationAffinity": {
            "consumeReservationType": "ANY_RESERVATION"
        },
        "serviceAccounts": [
            {
                "email": "batch-service-accountg#my-project.iam.gserviceaccount.com",
                "scopes": [
                    "https://www.googleapis.com/auth/cloud-platform"
                ]
            }
        ],
        "shieldedInstanceConfig": {
            "enableSecureBoot": False,
            "enableVtpm": True,
            "enableIntegrityMonitoring": True
        },
        "confidentialInstanceConfig": {
            "enableConfidentialCompute": False
        }
    }
    return compute.instances().insert(
        project="my-project",
        zone="northamerica-northeast1",
        body=config).execute()
The issue was with the zone. For some reason, when it was run on Cloud Run, the code below was the culprit.
return compute.instances().insert(
    project="my-project",
    zone="northamerica-northeast1",
    body=config).execute()
"northamerica-northeast1" should have been "northamerica-northeast1-a"
EDIT:
I made a new virtual machine image and quickly ran into the same problem: it would work locally and break down in the Cloud Run environment. After letting it sit for some time, it began to work again. This leads me to the conclusion that there is also some sort of delay before it can be used by Cloud Run.
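Coming back to the zone fix: one way to avoid this kind of region/zone mismatch (a sketch, not the code from the question) is to define the zone once and derive both the zonal URLs in the config and the insert() call from it:

# Sketch: keep project/zone in one place so the insert() call cannot drift
# from the zonal URLs used inside the instance config.
PROJECT = "my-project"
ZONE = "northamerica-northeast1-a"  # a zone, not the region "northamerica-northeast1"

config = {
    "name": vmname,  # vmname as in the question
    "zone": f"projects/{PROJECT}/zones/{ZONE}",
    "machineType": f"projects/{PROJECT}/zones/{ZONE}/machineTypes/e2-standard-4",
    # ... rest of the instance config as in the question ...
}

return compute.instances().insert(
    project=PROJECT,
    zone=ZONE,  # must match the zone used in the URLs above
    body=config).execute()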

Problems with renewing certificates after ACME API upgrade to V2

We are using acme.sh to renew our Let's Encrypt certificates and ran into problems today.
First we got some errors and hit the rate limit for invalid requests several times, and therefore decided to upgrade to V2, as that was recommended anyhow.
We upgraded by running acme.sh --upgrade and updated all the URLs in our domains' config to use the new v2 endpoints.
Now acme.sh --renew -d my.domain.at --ecc runs further than before (we had some trouble getting a nonce because we were missing the /directory postfix in the Le_API variable).
Now we have the problem that we receive an unauthorized error during verification:
{
  "type": "http-01",
  "status": "invalid",
  "error": {
    "type": "urn:ietf:params:acme:error:unauthorized",
    "detail": "Invalid response from https://my.domain.at/login [...]: \"\u003c!DOCTYPE html\u003e\\n\u003chtml\u003e\\n \u003chead\u003e\\n \u003cmeta http-equiv=\\\"X-UA-Compatible\\\" content=\\\"IE=edge\\\"/\u003e\\n \u003cmeta charset=\\\"utf-8\\\"/\u003e\"",
    "status": 403
  },
  "url": "https://acme-v02.api.letsencrypt.org/acme/chall-v3/11110584350/BRAIDw",
  "token": "my-hash",
  "validationRecord": [
    {
      "url": "http://my.domain.at/.well-known/acme-challenge/my-hash",
      "hostname": "my.domain.at",
      "port": "80",
      "addressesResolved": [
        "X.X.X.X",
        "..."
      ],
      "addressUsed": "..."
    },
    {
      "url": "https://my.domain.at/.well-known/acme-challenge/my-hash",
      "hostname": "my.domain.at",
      "port": "443",
      "addressesResolved": [
        "X.X.X.X",
        "..."
      ],
      "addressUsed": "..."
    },
    {
      "url": "https://my.domain.at/login",
      "hostname": "my.domain.at",
      "port": "443",
      "addressesResolved": [
        "X.X.X.X",
        "..."
      ],
      "addressUsed": "..."
    }
  ]
}
We have NGINX running and we are not sure what's happening here. As far as we understood, we shouldn't be redirected to the /login page.
Are we missing anything? The certificate renewal always worked flawlessly until we ran into problems today and tried to upgrade.
It turned out our NGINX configuration was wrong: we had configured the required route .well-known/acme-challenge to point to an empty folder, and we are now asking ourselves how that could have ever worked. ¯\_(ツ)_/¯
After fixing the config in NGINX everything worked as expected.
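For anyone hitting the same thing, a minimal sketch of the kind of NGINX server block that serves the HTTP-01 challenge files from a real webroot (the /var/www/acme path and the catch-all redirect are assumptions, not the actual config from above):

server {
    listen 80;
    server_name my.domain.at;

    # Serve ACME HTTP-01 challenge files from the directory acme.sh writes to
    # (the webroot passed to acme.sh), instead of an empty folder.
    location /.well-known/acme-challenge/ {
        root /var/www/acme;
    }

    # Everything else may keep redirecting to HTTPS (and on to /login).
    location / {
        return 301 https://$host$request_uri;
    }
}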

Heroku Pipelines: You need to have the deploy or operate permission on this app

I've inherited an app hosted on Heroku which uses review apps. Right up until the day before I took over responsibility for the system, the review apps were working absolutely fine, pulling in the branch, building it, then using a postdeploy command to pull in a database backup from the staging server.
Then I started, and all of a sudden it's not working. I don't know if this is related to something I've done (which at this point is very little) or if it's an actual permissions issue (I've been set up as an admin on everything, although the other developers, who this was working for before, are also unable to complete it), but the final step of pulling in the database is failing:
I'm at a complete loss as to what's going wrong here.
Below is the app.json file being used, and the $HEROKU_DATABASE_RESTORE is set to clixifix-staging-eu::b530 (which is the staging server::backup file).
{
  "buildpacks": [
    { "url": "heroku/nodejs" },
    { "url": "heroku/ruby" },
    { "url": "heroku-community/nginx" }
  ],
  "environments": {
    "review": {
      "addons": [
        {
          "plan": "heroku-postgresql:hobby-basic",
          "options": {
            "version": "9.6"
          }
        },
        { "plan": "memcachedcloud:30" },
        { "plan": "mailtrap:unpaid" }
      ],
      "buildpacks": [
        { "url": "heroku/nodejs" },
        { "url": "heroku/ruby" },
        { "url": "heroku-community/nginx" },
        { "url": "heroku-community/cli" }
      ],
      "env": {
        "SECRET_KEY_BASE": {
          "generator": "secret"
        }
      },
      "formation": {
        "web": {
          "quantity": 1,
          "size": "hobby"
        },
        "generalworker": {
          "quantity": 1,
          "size": "hobby"
        },
        "reportworker": {
          "quantity": 1,
          "size": "hobby"
        }
      },
      "scripts": {
        "postdeploy": "heroku pg:backups:restore $HEROKU_DATABASE_RESTORE DATABASE_URL -a $HEROKU_APP_NAME --confirm $HEROKU_APP_NAME"
      }
    }
  }
}
I reached out to Heroku, who gave me the answer I needed:
What is most likely causing the error in the postdeploy step is that, to run:
heroku pg:backups:restore $HEROKU_DATABASE_RESTORE DATABASE_URL -a $HEROKU_APP_NAME --confirm $HEROKU_APP_NAME
you will need a platform API key stored somewhere within your pipeline review app config vars so the CLI can log in. The user this API key belongs to has most likely lost access to your team and doesn't have permission to access your review apps. You should generate a new API key using heroku authorizations:create and update it on your pipeline.
Basically, when the previous developer left, his permissions were revoked, causing the error. I generated a new key using the command above, set the token as the HEROKU_API_KEY value in the env vars in the review app settings, and it worked.
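Roughly, the fix looks like this (a sketch; the app name is a placeholder, and on a pipeline the variable can also be set through the Review Apps settings in the dashboard instead of the CLI):

# create a platform API token under a user that still has access to the pipeline
heroku authorizations:create --description "review-app postdeploy restore"

# store the returned token so the CLI inside the review app can authenticate
heroku config:set HEROKU_API_KEY=<token from above> -a my-review-app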

Heroku Pipelines: Configuration from app.json is being ignored

I am trying to set up Heroku Pipelines and would like to configure my environment and build process using app.json, but my app.json is being ignored.
This is what my repo looks like:
- Dir1
- Dir2
- ...
- app.json
- some other files
I made a simple app.json, but no buildpack is being installed, no database is provisioned, and so on.
{
  "formation": {
    "web": {
      "quantity": 1,
      "size": "free"
    },
    "worker": {
      "quantity": 1,
      "size": "free"
    }
  },
  "addons": [
    {
      "plan": "heroku-postgresql:hobby-dev"
    },
    {
      "plan": "heroku-redis:hobby-dev"
    }
  ],
  "buildpacks": [
    {
      "url": "heroku/nodejs"
    }
  ]
}
Does anyone have an idea why the configuration from app.json is not being used?
So, here is the solution to the problem: the configuration in app.json is only used when the Heroku app is created for the first time. So you need to delete the app and create a new one to see the changes take effect.
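In practice that means something like the following (a sketch; the app name is a placeholder, and for review apps the equivalent is deleting the review app and letting the pipeline create a fresh one, e.g. via "Create review app" in the dashboard):

# app.json is only evaluated when the app is created, so destroy the old app
# and recreate it (or recreate the review app from the pipeline dashboard)
heroku apps:destroy -a my-pipeline-app --confirm my-pipeline-app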

Google home actions.fulfillment.devices not getting enabled

I am using Google smart home actions for IoT... I updated my action URL and account linking details. When I try to enable Test in the simulator to deploy my TestAPP to the cloud, it fails with the error "GoogleFulfillment 'actions.fulfillment.devices' is not supported", and the linked app does not update the old URL. This worked a few days ago. Any changes on Google's side, or does anybody have a clue?
There is a manual workaround. Thanks to the Google Assistant forum:
Steps:
1 - Download the gactions cli at https://developers.google.com/actions/tools/gactions-cli
2 - Authenticate with any command:
./gactions list --project [YOUT_PROJECT_ID]
3 - Download the json representation of your action:
./gactions get --project [YOUR_PROJECT_ID] --version draft > action.json
4 - Edit the JSON. Extract the only object from its array and remove the nested "googleFulfillments" object:
"googleFulfillments": [
  {
    "endpoint": {
      "baseUrl": "[URL]"
    },
    "name": "actions.fulfillment.devices"
  }
],
5 - Delete the brackets "[ ]" at the top and end of the file. Only one language can be activated at a time, so delete any data from the action.json file that is unnecessary. The file looks like this, with its parameters:
{
  "accountLinking": {
    "accessTokenUrl": "xxxx",
    "assertionTypes": [
      "ID_TOKEN"
    ],
    "authenticationUrl": "xxx",
    "clientId": "xxx",
    "clientSecret": "xxxx",
    "grantType": "AUTH_CODE"
  },
  "actions": [
    {
      "description": "Smart home action for project xxxxxxx",
      "fulfillment": {
        "conversationName": "AoGSmartHomeConversation_xxxxxx"
      },
      "name": "actions.devices"
    }
  ],
  "conversations": {
    "AoGSmartHomeConversation_xxxxxxxx": {
      "name": "",
      "url": "xxxxxxx"
    }
  },
  "locale": "en",
  "manifest": {
    "category": "xxx",
    "companyName": "xxx",
    "contactEmail": "xxx",
    "displayName": "xxx",
    "largeLandscapeLogoUrl": "xxxxxx",
    "longDescription": "xxxx",
    "privacyUrl": "xxx",
    "shortDescription": "xxxx",
    "smallSquareLogoUrl": "xxxx",
    "termsOfServiceUrl": "xxxxx",
    "testingInstructions": "xxxxx"
  }
}
6 - If you have updated the fulfillment, authentication or token URL, go to the Google Actions Console and update its entry there;
7 - Push your fixed action into test:
./gactions test --project [YOUR_PROJECT_ID] --action_package ./action.json
This replaces the step "Click Simulator under TEST" in the Google Assistant manual setup. It worked for me!
More help here: https://community.home-assistant.io/t/google-assistant-trouble-shooting/99223/142
