Access Caddy server API over HTTP

I'm running Caddy server on an EC2 instance.
Right now I'm able to write the config JSON into a file, app.json (using vim), and load it from the SSH terminal:
curl localhost:2019/load -H 'Content-Type: application/json' -d @app.json
Now I want to load the configuration from another server over HTTP, so I have added the admin configuration to app.json:
{
    "admin": {
        "disabled": false,
        "enforce_origin": false,
        "origins": ["localhost:2019", "103.55.1.2:2019", "54.190.1.2:2019"]
    },
    "apps": {
        "http": {
            "servers": {
                "scanning": {
                    "listen": [":443"],
                    "routes": [{
                        "handle": [{
                            "handler": "file_server",
                            "root": "/var/www/html/app-frontend"
                        }],
                        "match": [{
                            "host": ["caddy.example.com"]
                        }]
                    }]
                }
            }
        }
    }
}
Where the IP addresses are:
103.55.1.2: my ISP IP address
54.190.1.2: the EC2 private IP address
I'm trying to get the config from Postman using the EC2 IP address, but it does not work:
http://54.190.1.2:2019/config/
How can I get the config and load config in Caddy over HTTP?
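For reference, a minimal sketch of the change that is usually needed (not tested against this setup): by default Caddy's admin endpoint listens only on localhost:2019, so it is unreachable from other machines regardless of the origins list. To serve the admin API on all interfaces you would add an explicit listen address to the admin block, and the EC2 security group would also need to allow inbound TCP 2019 (be aware that exposing the admin API publicly is a security risk):
{
    "admin": {
        "listen": "0.0.0.0:2019",
        "origins": ["103.55.1.2:2019", "54.190.1.2:2019"]
    }
}
Then, from the other machine:
curl http://54.190.1.2:2019/config/
curl http://54.190.1.2:2019/load -H 'Content-Type: application/json' -d @app.json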

Related

apache/apisix working mocking plugin example

I've installed apisix and apisix-dashboard with helm on my k8s cluster.
I used all defaults except the API keys for the admin and viewer accounts, and a custom username/password for the dashboard. So I'm currently running version 2.15.
My installation steps:
helm repo add apisix https://charts.apiseven.com
helm repo update
# installing apisix/apisix
helm install --set-string admin.credentials.admin="new_api_key" \
--set-string admin.credentials.viewer="new_api_key" apisix apisix/apisix --create-namespace --namespace my-apisix
# installing apisix/apisix-dashboard, where values.yaml contains username/password
helm install -f values.yaml apisix-dashboard apisix/apisix-dashboard --create-namespace --namespace my-apisix
I'm unable to configure the mocking plugin; I've been following the docs.
In the provided example I'm unable to call the API on the route with ID 1, so I've created a custom route and then used its VIEW JSON, where I've changed the configuration according to the sample provided.
All calls to this route return 502 errors; in the logs I can see that the route is forwarding traffic to a non-existent server. All of that leads me to believe that the mocking plugin is disabled.
Example of my route:
{
    "uri": "/mock-test.html",
    "name": "mock-sample-read",
    "methods": [
        "GET"
    ],
    "plugins": {
        "mocking": {
            "content_type": "application/json",
            "delay": 1,
            "disable": false,
            "response_schema": {
                "$schema": "http://json-schema.org/draft-04/schema#",
                "properties": {
                    "a": {
                        "type": "integer"
                    },
                    "b": {
                        "type": "integer"
                    }
                },
                "required": [
                    "a",
                    "b"
                ],
                "type": "object"
            },
            "response_status": 200,
            "with_mock_header": true
        }
    },
    "upstream": {
        "nodes": [
            {
                "host": "127.0.0.1",
                "port": 1980,
                "weight": 1
            }
        ],
        "timeout": {
            "connect": 6,
            "send": 6,
            "read": 6
        },
        "type": "roundrobin",
        "scheme": "https",
        "pass_host": "node",
        "keepalive_pool": {
            "idle_timeout": 60,
            "requests": 1000,
            "size": 320
        }
    },
    "status": 1
}
Can anyone provide me with an actual working example or point out what I'm missing? Any suggestions are welcome.
EDIT:
Looking at the logs of apache/apisix:2.15.0-alpine, it looks like the mocking plugin is disabled. According to the docs: "The mocking Plugin is used for mocking an API. When executed, it returns random mock data in the format specified and the request is not forwarded to the Upstream."
Error logs (where I've changed the domain and IP address) suggest that the traffic is being forwarded to the upstream:
10.10.10.24 - - [23/Sep/2022:11:33:16 +0000] my.domain.com "GET /mock-test.html HTTP/1.1" 502 154 0.001 "-" "PostmanRuntime/7.29.2" 127.0.0.1:1980 502 0.001 "http://my.domain.com"
Plugins are enabled globally; I've tested this using the Keycloak plugin.
EDIT 2: Could this be a bug in version 2.15 of APISIX? There is currently no open issue on the GitHub repo.
Yes, the mocking plugin is not enabled by default.
You can just add it to the enabled plugins list here:
https://github.com/apache/apisix-helm-chart/blob/7ddeca5395a2de96acd06bada30f3ab3580a6252/charts/apisix/values.yaml#L219-L269
You can also submit a PR directly to fix it.
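For illustration only, a rough sketch of how that could look in a custom values file passed with helm install -f (assuming the chart's top-level plugins: list shown at the linked lines; copy the complete default list from your chart version and append mocking, rather than using this truncated example, since overriding the list with only one entry would disable the other plugins):
plugins:
  - mocking        # the plugin missing from the chart's default list
  # ...plus all of the default plugins copied from the chart's values.yaml,
  # otherwise they would no longer be enabled once this list overrides the default.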

web app works locally and on app engine, but not on cloud run

So I've run into this issue with a web app I've made:
It gets a file path as input.
If the file exists in a bucket, it uses a Python client API to create a Compute Engine instance.
It passes the file path to the instance in the startup script.
When I ran it locally, I created a Python virtual environment and then ran the app. When I submit the input in the web browser, the virtual machine is created by the API call. I assumed it was using my personal account, so I switched to the service account on the command line with 'gcloud config set account', and it ran fine once more.
When I simply go to the source code directory and deploy it as is (to App Engine), the application can create the virtual machine instances as well.
When I use Google Cloud Build and deploy to Cloud Run, it doesn't create the VM instance.
The web app itself is not throwing any errors, but when I check Compute Engine's logs, there is an error:
{
    "protoPayload": {
        "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
        "status": {
            "code": 3,
            "message": "INVALID_PARAMETER"
        },
        "authenticationInfo": {
            "principalEmail": "####"
        },
        "requestMetadata": {
            "callerIp": "#####",
            "callerSuppliedUserAgent": "(gzip),gzip(gfe)"
        },
        "serviceName": "compute.googleapis.com",
        "methodName": "v1.compute.instances.insert",
        "resourceName": "projects/someproject/zones/somezone/instances/nameofinstance",
        "request": {
            "@type": "type.googleapis.com/compute.instances.insert"
        }
    },
    "insertId": "######",
    "resource": {
        "type": "gce_instance",
        "labels": {
            "instance_id": "#####",
            "project_id": "someproject",
            "zone": "somezone"
        }
    },
    "timestamp": "2021-06-16T12:18:21.253551Z",
    "severity": "ERROR",
    "logName": "projects/someproject/logs/cloudaudit.googleapis.com%2Factivity",
    "operation": {
        "id": "operation-#####",
        "producer": "compute.googleapis.com",
        "last": true
    },
    "receiveTimestamp": "2021-06-16T12:18:21.253551Z"
}
In theory, it is the exact same code that worked from my laptop and on App Engine. I'm baffled why it only fails on Cloud Run.
App Engine's default service account was stripped of all its roles and given a custom role tailored to the web app's function.
The Cloud Run service is using a different service account, but it was given that exact same custom role.
Here is the method I use to call the API:
# Imports required by this method
import googleapiclient.discovery
from datetime import date

def create_instance(path):
    compute = googleapiclient.discovery.build('compute', 'v1')
    vmname = "piinnuclei" + date.today().strftime("%Y%m%d%H%M%S")
    startup_script = "#! /bin/bash\napt update\npip3 install pg8000\nexport BUCKET_PATH=my-bucket/{}\ngsutil -m cp -r gs://$BUCKET_PATH /home/connor\ncd /home/connor\n./cloud_sql_proxy -dir=cloudsql -instances=sql-connection-name=unix:sql-connection-name &\npython3 run_analysis_upload.py\nexport ZONE=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')\nexport NAME=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/name -H 'Metadata-Flavor: Google')\ngcloud --quiet compute instances delete $NAME --zone=$ZONE".format(path)
    config = {
        "kind": "compute#instance",
        "name": vmname,
        "zone": "projects/my-project/zones/northamerica-northeast1-a",
        "machineType": "projects/my-project/zones/northamerica-northeast1-a/machineTypes/e2-standard-4",
        "displayDevice": {
            "enableDisplay": False
        },
        "metadata": {
            "kind": "compute#metadata",
            "items": [
                {
                    "key": "startup-script",
                    "value": startup_script
                }
            ]
        },
        "tags": {
            "items": []
        },
        "disks": [
            {
                "kind": "compute#attachedDisk",
                "type": "PERSISTENT",
                "boot": True,
                "mode": "READ_WRITE",
                "autoDelete": True,
                "deviceName": vmname,
                "initializeParams": {
                    "sourceImage": "projects/my-project/global/images/my-image",
                    "diskType": "projects/my-project/zones/northamerica-northeast1-a/diskTypes/pd-balanced",
                    "diskSizeGb": "100"
                },
                "diskEncryptionKey": {}
            }
        ],
        "canIpForward": False,
        "networkInterfaces": [
            {
                "kind": "compute#networkInterface",
                "subnetwork": "projects/my-project/regions/northamerica-northeast1/subnetworks/default",
                "accessConfigs": [
                    {
                        "kind": "compute#accessConfig",
                        "name": "External NAT",
                        "type": "ONE_TO_ONE_NAT",
                        "networkTier": "PREMIUM"
                    }
                ],
                "aliasIpRanges": []
            }
        ],
        "description": "",
        "labels": {},
        "scheduling": {
            "preemptible": False,
            "onHostMaintenance": "MIGRATE",
            "automaticRestart": True,
            "nodeAffinities": []
        },
        "deletionProtection": False,
        "reservationAffinity": {
            "consumeReservationType": "ANY_RESERVATION"
        },
        "serviceAccounts": [
            {
                "email": "batch-service-accountg@my-project.iam.gserviceaccount.com",
                "scopes": [
                    "https://www.googleapis.com/auth/cloud-platform"
                ]
            }
        ],
        "shieldedInstanceConfig": {
            "enableSecureBoot": False,
            "enableVtpm": True,
            "enableIntegrityMonitoring": True
        },
        "confidentialInstanceConfig": {
            "enableConfidentialCompute": False
        }
    }
    return compute.instances().insert(
        project="my-project",
        zone="northamerica-northeast1",
        body=config).execute()
The issue was with the zone. For some reason, when it was run on Cloud Run, the code below was the culprit:
return compute.instances().insert(
    project="my-project",
    zone="northamerica-northeast1",
    body=config).execute()
"northamerica-northeast1" should have been "northamerica-northeast1-a"
EDIT:
I made a new virtual machine image and quickly ran into the same problem: it would work locally and break down in the Cloud Run environment. After letting it sit for some time, it began to work again. This leads me to the conclusion that there is also some sort of delay before it can be used by Cloud Run.

Is it possible to register ec2 node in elbv2 target group via CF template?

I use an ELBv2 load balancer (AWS::ElasticLoadBalancingV2) with a set of rules that route requests to different services running on several TCP ports on my EC2 instance. My application doesn't support scaling, so I can't use an Auto Scaling group; I just need an ELBv2 attached to my EC2 instance. We are using CloudFormation for deployment automation.
With Auto Scaling I can use the TargetGroupARNs property of AWS::AutoScaling::AutoScalingGroup:
"AutoScalingCluster": {
"Type": "AWS::AutoScaling::AutoScalingGroup",
"Properties": {
"TargetGroupARNs": [
{
"Ref": "FirstappTargetGroup"
},
{
"Ref": "SecondappTargetGroup"
}
],
...
}
However, for AWS::EC2::Instance there is no such property (according to https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html), and there are no properties related to target groups at all.
It is possible to register nodes in a target group after the stack is created, but that's an additional step which might not be convenient.
Found a solution here: https://github.com/getcft/aws-elb-to-ec2-target-group-cf-template/blob/master/elb-to-ec2-target-group-cf-template.yml#L338
You can specify EC2 nodes under the Targets property of AWS::ElasticLoadBalancingV2::TargetGroup:
"MyTargetGroup": {
"Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
"Properties": {
"Port": 80,
"Protocol": "HTTP",
"VpcId": {
"Ref": "VpcId"
},
"Targets": [
{
"Id": {
"Ref": "MyEC2Node"
}
}
]
}
}
So it's not an EC2 instance property; it's a TargetGroup property in this case.
Note that the AWS API doesn't return that property via aws elbv2 describe-target-groups either.
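If you want to confirm the registration from the CLI after the stack is created, the registered instances do show up via the target health call (sketch; replace the placeholder with your target group's ARN):
aws elbv2 describe-target-health --target-group-arn <my-target-group-arn>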

Azure Iotedge start docker with --net=host so that I can access my IP

In my Java code I would like to find my machine's IP address. My code runs inside a Docker container, and I always get the IP address of the container instead of the host machine.
I run the container like this:
docker run -p 8080:8080 --privileged --net=host -d 6b45f71550a3
This is my Java code
InetAddress addr = InetAddress.getLocalHost();
String hostname = InetAddress.getByName(addr.getHostName()).toString();
I need to modify the deployment.template.json so that the generated container picks up the IP address of the host machine:
"modules": {
"MyModule": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "dev.azurecr.io/dev:0.0.1-arm32v7",
"createOptions": {
"ExposedPorts":{"8080/tcp": {}},
"HostConfig": {
"PortBindings": {
"8080/tcp": [
{
"HostPort": "8080"
}
]
}
}
}
}
}
}
I was going to say that you can't do that, but apparently you can, by using:
"createOptions": {
"NetworkingConfig": {
"EndpointsConfig": {
"host": {}
}
},
"HostConfig": {
"NetworkMode": "host"
}
}
I haven't tried it. I found it here: https://github.com/Azure/iot-edge-v1/issues/517. Maybe that will help.
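For context, merged into the module from the question it would look roughly like this (untested sketch based only on the snippets above; the ExposedPorts/PortBindings entries are dropped because Docker ignores port mappings when a container runs in host network mode):
"MyModule": {
    "version": "1.0",
    "type": "docker",
    "status": "running",
    "restartPolicy": "always",
    "settings": {
        "image": "dev.azurecr.io/dev:0.0.1-arm32v7",
        "createOptions": {
            "NetworkingConfig": {
                "EndpointsConfig": {
                    "host": {}
                }
            },
            "HostConfig": {
                "NetworkMode": "host"
            }
        }
    }
}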

Failure to connect to proxy "Certificate signed by unknown authority"

I'm attempting to connect to a CloudSQL instance via a cloudsql-proxy container on my Kubernetes deployment. I have the cloudsql credentials mounted and the value of GOOGLE_APPLICATION_CREDENTIALS set.
However, I'm still receiving the following error in my logs:
2018/10/08 20:07:28 Failed to connect to database: Post https://www.googleapis.com/sql/v1beta4/projects/[projectID]/instances/[appName]/createEphemeral?alt=json&prettyPrint=false: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: x509: certificate signed by unknown authority
My connection string looks like this:
[dbUser]:[dbPassword]@cloudsql([instanceName])/[dbName]?charset=utf8&parseTime=True&loc=Local
And the proxy dialer is imported for its side effects (blank import) as:
_ github.com/GoogleCloudPlatform/cloudsql-proxy/proxy/dialers/mysql
Anyone have an idea what might be missing?
EDIT:
Deployment Spec looks something like this (JSON formatted):
{
    "replicas": 1,
    "selector": {
        ...
    },
    "template": {
        ...
        "spec": {
            "containers": [
                {
                    "image": "[app-docker-imager]",
                    "name": "...",
                    "env": [
                        ...
                        {
                            "name": "MYSQL_PASSWORD",
                            ...
                        },
                        {
                            "name": "MYSQL_USER",
                            ...
                        },
                        {
                            "name": "GOOGLE_APPLICATION_CREDENTIALS",
                            "value": "..."
                        }
                    ],
                    "ports": [
                        {
                            "containerPort": 8080,
                            "protocol": "TCP"
                        }
                    ],
                    "volumeMounts": [
                        {
                            "mountPath": "/secrets/cloudsql",
                            "name": "[secrets-mount-name]",
                            "readOnly": true
                        }
                    ]
                },
                {
                    "command": [
                        "/cloud_sql_proxy",
                        "-instances=...",
                        "-credential_file=..."
                    ],
                    "image": "gcr.io/cloudsql-docker/gce-proxy:1.11",
                    "name": "...",
                    "ports": [
                        {
                            "containerPort": 3306,
                            "protocol": "TCP"
                        }
                    ],
                    "volumeMounts": [
                        {
                            "mountPath": "/secrets/cloudsql",
                            "name": "[secrets-mount-name]",
                            "readOnly": true
                        }
                    ]
                }
            ],
            "volumes": [
                {
                    "name": "[secrets-mount-name]",
                    "secret": {
                        "defaultMode": 420,
                        "secretName": "[secrets-mount-name]"
                    }
                }
            ]
        }
    }
}
The error message indicates that your client is not able to trust the certificate of https://www.googleapis.com. There are two possible causes for this:
Your client does not know what root certificates to trust. The official cloudsql-proxy Docker image includes root certificates, so if you are using that image, this is not your problem. If you are not using that image, you should use it (or at least install CA certificates in your image).
Your outbound traffic is being intercepted by a proxy server that is using a different, untrusted certificate. This might be malicious (in which case you need to investigate who is intercepting your traffic). More benignly, you might be in an organization that uses an outbound proxy to inspect traffic according to policy. If this is the case, you should build a new Docker image that includes the CA certificate used by your organization's outbound proxy.
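For the first cause, a minimal sketch of installing the system CA bundle in a Debian-based application image (the base image here is an assumption; use the equivalent package command for your actual base image):
FROM debian:bullseye-slim
# Install root CA certificates so TLS connections to googleapis.com can be verified.
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*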
