I have a simple Spring Config Server application which consumes its configuration data from a Git repository. The Config Server works perfectly as expected in my local and development environments. Once deployed to the production server, though, I kept seeing this error: org.springframework.cloud.config.server.environment.NoSuchLabelException: No such label: master
Here is the full JSON response:
{
"status": "DOWN",
"configServer": {
"status": "DOWN",
"repository": {
"application": "app",
"profiles": "default"
},
"error": "org.springframework.cloud.config.server.environment.NoSuchLabelException: No such label: master"
},
"discoveryComposite": {
"description": "Discovery Client not initialized",
"status": "UNKNOWN",
"discoveryClient": {
"description": "Discovery Client not initialized",
"status": "UNKNOWN"
}
},
"diskSpace": {
"status": "UP",
"total": 10434699264,
"free": 6599856128,
"threshold": 10485760
},
"refreshScope": {
"status": "UP"
},
"hystrix": {
"status": "UP"
}
}
So I traced it down to the spring-cloud-config GitHub repo and saw that the exception is thrown here: https://github.com/spring-cloud/spring-cloud-config/blob/b7afa2bb641913b89e32ae258bd6ec442080b9e6/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/environment/JGitEnvironmentRepository.java#185 The error is thrown by the GitCommand class's call() method on line 235 when a Git branch is not found. But I can't for the life of me understand why! I have double-checked and verified that the "master" branch does indeed exist in the Git repository that holds the configuration properties.
The application properties for the Config Server are defined in a bootstrap.yml file, as follows:
server:
  port: 8080
management:
  context-path: /admin
endpoints:
  enabled: false
  health:
    enabled: true
logging:
  level:
    com.netflix.discovery: 'OFF'
    org.springframework.cloud: 'DEBUG'
eureka:
  instance:
    leaseRenewalIntervalInSeconds: 10
    statusPageUrlPath: /admin/info
    healthCheckUrlPath: /admin/health
spring:
  application:
    name: config-service
  cloud:
    config:
      enabled: false
      failFast: true
      server:
        git:
          uri: 'https://some-spring-config-repo.git'
          username: 'fakeuser'
          password: 'fakepass'
          basedir: "${catalina.home}/target/config"
Any help would be most appreciated!!
Posting this in case someone else finds it useful.
I'm using the setting below in the Config Server's properties file to point it at the "main" branch:
spring.cloud.config.server.git.default-label=main
Additional info: my Git repo is not public, and I'm using the newer personal access token (PAT) based authentication instead of the older username/password authentication.
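For completeness, a minimal sketch of the relevant Git settings (the default-label property is the one above; supplying the token in the password field is an assumption that matches GitHub-style HTTPS access, so check your Git host's documentation; the URI, username, and token reference are placeholders):
spring.cloud.config.server.git.uri=https://github.com/your-org/your-config-repo.git
spring.cloud.config.server.git.default-label=main
# PAT auth: the token typically goes in the password field; the username can be your account name
spring.cloud.config.server.git.username=your-github-username
spring.cloud.config.server.git.password=${CONFIG_REPO_TOKEN}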
I've installed apisix and apisix-dashboard with Helm on my k8s cluster.
I used all the defaults except the API keys for the admin and viewer accounts, and a custom username/password for the dashboard. So I'm currently running version 2.15.
My installation steps:
helm repo add apisix https://charts.apiseven.com
helm repo update
# installing apisix/apisix
helm install --set-string admin.credentials.admin="new_api_key" \
  --set-string admin.credentials.viewer="new_api_key" \
  apisix apisix/apisix --create-namespace --namespace my-apisix
# installing apisix/apisix-dashboard, where values.yaml contains username/password
helm install -f values.yaml apisix-dashboard apisix/apisix-dashboard --create-namespace --namespace my-apisix
I'm unable to configure the mocking plugin, even though I've been following the docs.
With the provided example I'm unable to call the API on the route with ID 1, so I created a custom route and then used the VIEW JSON, changing the configuration according to the provided sample.
All calls on this route return 502 errors, and in the logs I can see that the route is forwarding traffic to a non-existent server. All of that leads me to believe that the mocking plugin is disabled.
Example of my route:
{
"uri": "/mock-test.html",
"name": "mock-sample-read",
"methods": [
"GET"
],
"plugins": {
"mocking": {
"content_type": "application/json",
"delay": 1,
"disable": false,
"response_schema": {
"$schema": "http://json-schema.org/draft-04/schema#",
"properties": {
"a": {
"type": "integer"
},
"b": {
"type": "integer"
}
},
"required": [
"a",
"b"
],
"type": "object"
},
"response_status": 200,
"with_mock_header": true
}
},
"upstream": {
"nodes": [
{
"host": "127.0.0.1",
"port": 1980,
"weight": 1
}
],
"timeout": {
"connect": 6,
"send": 6,
"read": 6
},
"type": "roundrobin",
"scheme": "https",
"pass_host": "node",
"keepalive_pool": {
"idle_timeout": 60,
"requests": 1000,
"size": 320
}
},
"status": 1
}
Can anyone provide me with an actual working example or point out what I'm missing? Any suggestions are welcome.
EDIT:
Looking at the logs of the apache/apisix:2.15.0-alpine container, it looks like the mocking plugin is disabled. From the docs: "The mocking Plugin is used for mocking an API. When executed, it returns random mock data in the format specified and the request is not forwarded to the Upstream."
The error logs (domain and IP address changed) suggest that the traffic is instead being forwarded to the upstream:
10.10.10.24 - - [23/Sep/2022:11:33:16 +0000] my.domain.com "GET /mock-test.html HTTP/1.1" 502 154 0.001 "-" "PostmanRuntime/7.29.2" 127.0.0.1:1980 502 0.001 "http://my.domain.com"
Plugins are enabled globally; I've tested this using the Keycloak plugin.
EDIT 2: Could this be a bug in version 2.15 of APISIX? There is currently no open issue on the GitHub repo.
Yes, the mocking plugin is not enabled by default in the Helm chart.
You can just add it to the plugin list here (see the sketch below):
https://github.com/apache/apisix-helm-chart/blob/7ddeca5395a2de96acd06bada30f3ab3580a6252/charts/apisix/values.yaml#L219-L269
You can also submit a PR directly to fix it.
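A minimal sketch of what that looks like in the chart's values.yaml (the top-level plugins key here is an assumption based on the block linked above, so double-check it against your chart version; the entries under it become APISIX's enabled plugin list):
plugins:
  # ...keep the chart's existing default plugin entries here...
  - mocking
After updating values.yaml, apply it with something like: helm upgrade apisix apisix/apisix -f values.yaml --namespace my-apisix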
I have Micrometer + Spring Actuator in my application, and a metric shows up like this in Kibana:
"name": "jvm_threads_daemon",
"type": "gauge",
"cluster": "cluster_app",
"kubernetes_namespace": "test",
"kubernetes_pod_name": "fargate-ip-10-121-31-148.ec2.internal",
"value": 18
But I would like to convert to this:
"jvm_threads_daemon": "18",
"type": "gauge",
"cluster": "cluster_app",
"kubernetes_namespace": "test",
"kubernetes_pod_name": "fargate-ip-10-121-31-148.ec2.internal",
"value": 18
So, note that I need the metric name as the field name rather than inside "name". Is this possible in Micrometer?
See my application.yml:
management:
endpoints:
web:
exposure:
include: "*"
base-path: /actuator
path-mapping.health: health
endpoint:
health:
probes:
enabled: true
show-details: always
metrics:
export:
elastic:
host: "https://elastico.url:443"
index: "metricbeat-k8s-apps"
user-name: "elastic"
password: "xxxx"
step: 30s
connect-timeout: 10s
tags:
cluster: "cluster_app"
kubernetes.pod.name: ${NODE_NAME}
kubernetes.namespace: ${POD_NAMESPACE}
You can extend/implement your own ElasticMeterRegistry and define your own document format.
One thing to consider, though: if you make the change you want, how will you be able to query/filter/aggregate metrics in Kibana?
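That said, a minimal sketch of the registry-extension idea, assuming Spring Boot's Elastic export auto-configuration backs off when it sees a user-defined ElasticMeterRegistry bean; the class and bean names are made up for illustration, and the exact method to override for the document layout depends on your Micrometer version:
import io.micrometer.core.instrument.Clock;
import io.micrometer.elastic.ElasticConfig;
import io.micrometer.elastic.ElasticMeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CustomElasticMetricsConfig {

    // Hypothetical subclass: override the document-building hook of your
    // Micrometer version here and emit the metric name as a top-level field
    // (e.g. "jvm_threads_daemon": 18) instead of a "name" attribute.
    static class CustomFormatElasticMeterRegistry extends ElasticMeterRegistry {
        CustomFormatElasticMeterRegistry(ElasticConfig config, Clock clock) {
            super(config, clock);
        }
    }

    @Bean
    public ElasticMeterRegistry elasticMeterRegistry(ElasticConfig config, Clock clock) {
        return new CustomFormatElasticMeterRegistry(config, clock);
    }
}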
Keycloak policy enforcer not working with a sample Spring Boot application.
I am using Keycloak version 6.0.1 and trying to integrate it with a sample Spring Boot application (Spring Boot version 2.1.3). My objective is to set up policies and permissions in Keycloak and use the Keycloak policy enforcer in my sample Spring Boot application, so that all authorization decisions are enforced automatically using the permissions defined in Keycloak and no authorization code is required in the sample application.
My sample Spring Boot application just returns a list of users from an in-memory database:
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class JPAUserResource {
    @Autowired
    private UserRepository userRepo;

    @GetMapping(path = "/jpausers")
    public List<JPAUser> retrieveAllUsers() {
        return userRepo.findAll();
    }
}
My application.properties file has following content:
server.port=38080
spring.jpa.show-sql=true
spring.h2.console.enabled=true
logging.level.org.springframework.security=DEBUG
logging.level.org.keycloak.adapters.authorization=DEBUG
#Keycloak Configuration
keycloak.auth-server-url=http://192.168.154.190:18180/auth
keycloak.realm=master
keycloak.resource=login-app
keycloak.principal-attribute=preferred_username
keycloak.credentials.secret=195925d6-b258-407d-a65d-f1fd12d7a876
keycloak.policy-enforcer-config.enforcement-mode=enforcing
keycloak.realm-key=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAjyYRe6LxBxO9hVtr4ScsMCBp3aPE9qbJLptPIMQCZR6JhVhOxA1kxhRmVYHXR5pdwiQWU8MriRhAY1JGniG6GNS1+BL+JaUiaGxov4rpD2SIMdrs8YjjSoD3Z8wvsMAopzWG48i9T/ppNaqKTkDZHbHAXOYJn+lymQ4EqpQrJ1Uh+SUA8XcLvWUQ12ty9BieujudWhnAgQ4zxyJY3I8sZwjaRIxndzSlyPJo45lWzXkpqcl92eU/Max7LRM4WKqsUvu86DgqlXbJcz8T+GUeF30ONQDSLX9rwNIT4ZiCVMT7x6YfKXZW6jxC0UiXxZuT23xk8A9iCP4rC9xo1NfGTwIDAQAB
keycloak.policy-enforcer-config.paths[0].path=/jpausers
keycloak.policy-enforcer-config.paths[0].methods[0].method=GET
My Keycloak authorization settings are as below:
{
"allowRemoteResourceManagement": true,
"policyEnforcementMode": "ENFORCING",
"resources": [
{
"name": "Default Resource",
"type": "urn:login-app:resources:default",
"ownerManagedAccess": false,
"attributes": {},
"_id": "501febc8-f3e1-411f-aecf-376b4786c24e",
"uris": [
"/*"
]
},
{
"name": "jpausers",
"ownerManagedAccess": false,
"displayName": "jpausers",
"attributes": {},
"_id": "a8f691db-39ef-4b2c-80fb-37224e270f1e",
"uris": [
"/jpausers"
],
"scopes": [
{
"name": "GET"
},
{
"name": "POST"
}
]
}
],
"policies": [
{
"id": "94518189-3794-451c-9996-eec22543d802",
"name": "Default Policy",
"description": "A policy that grants access only for users within this realm",
"type": "js",
"logic": "POSITIVE",
"decisionStrategy": "AFFIRMATIVE",
"config": {
"code": "// by default, grants any permission associated with this policy\n$evaluation.grant();\n"
}
},
{
"id": "0242cf72-365d-49ae-8d5b-4ced24736f24",
"name": "test_jpa",
"type": "role",
"logic": "POSITIVE",
"decisionStrategy": "UNANIMOUS",
"config": {
"roles": "[{\"id\":\"jpa\",\"required\":false}]"
}
},
{
"id": "5c34e2b4-a56a-45f9-a1cc-94788bcb41b0",
"name": "test_perm1",
"type": "resource",
"logic": "POSITIVE",
"decisionStrategy": "UNANIMOUS",
"config": {
"resources": "[\"jpausers\"]",
"applyPolicies": "[\"test_jpa\"]"
}
}
],
"scopes": [
{
"id": "4ee351e6-7095-453a-a4f4-badbc9ec1ba0",
"name": "GET",
"displayName": "GET"
},
{
"id": "9119aab2-75a0-49d1-a076-8d9210c3e457",
"name": "POST",
"displayName": "POST"
}
]
}
When I send a request to my REST API '/jpausers', it fails with the following messages on the console:
19:17:52.044 [http-nio-38080-exec-1] INFO o.k.a.authorization.PolicyEnforcer - Paths provided in configuration.
19:17:52.045 [http-nio-38080-exec-1] DEBUG o.k.a.authorization.PolicyEnforcer - Trying to find resource with uri [/jpausers] for path [/jpausers].
19:17:52.151 [http-nio-38080-exec-1] DEBUG o.k.a.authorization.PolicyEnforcer - Initialization complete. Path configurations:
19:17:52.151 [http-nio-38080-exec-1] DEBUG o.k.a.authorization.PolicyEnforcer - PathConfig{name='null', type='null', path='/jpausers', scopes=[], id='a8f691db-39ef-4b2c-80fb-37224e270f1e', enforcerMode='ENFORCING'}
19:17:52.154 [http-nio-38080-exec-1] DEBUG o.k.a.authorization.PolicyEnforcer - Policy enforcement is enabled. Enforcing policy decisions for path [http://192.168.109.97:38080/jpausers].
19:17:52.156 [http-nio-38080-exec-1] DEBUG o.k.a.a.KeycloakAdapterPolicyEnforcer - Sending challenge
19:17:52.157 [http-nio-38080-exec-1] DEBUG o.k.a.authorization.PolicyEnforcer - Policy enforcement result for path [http://192.168.109.97:38080/jpausers] is : DENIED
19:17:52.157 [http-nio-38080-exec-1] DEBUG o.k.a.authorization.PolicyEnforcer - Returning authorization context with permissions:
UMA authorization is disabled. I first retrieved an access token using the OpenID Connect token API with the password credentials grant type, and then tried to access my REST API '/jpausers' with that access token.
Can someone help with the issue here? How do I fix this? Do I have to enable UMA to make the policy enforcer work?
At a quick look, I can see your mapping in application.properties is not complete: you have not mapped your HTTP method to the scope you configured in Keycloak. Something like this:
keycloak.policy-enforcer-config.paths[0].path=/jpausers
keycloak.policy-enforcer-config.paths[0].methods[0].method=GET
keycloak.policy-enforcer-config.paths[0].methods[0].scopes[0]=GET
I think you are missing keycloak.securityConstraints[0].securityCollections[0].name=jpausers
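For context, a minimal sketch of how that property usually sits alongside the other security-constraint settings (the property names mirror the other answers here; the role and pattern values are placeholders for this example):
keycloak.securityConstraints[0].authRoles[0]=*
keycloak.securityConstraints[0].securityCollections[0].name=jpausers
keycloak.securityConstraints[0].securityCollections[0].patterns[0]=/jpausers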
I had the same issue and was able to resolve it with similar settings in my application properties YAML file, as shown below:
keycloak:
security-constraints:
- auth-roles:
- "*"
security-collections:
- name:
patterns:
- /*
It is working for me on Keycloak 19.0.1:
keycloak.securityConstraints[0].authRoles[0]=*
keycloak.securityConstraints[0].securityCollections[0].patterns[0]=/
keycloak.securityConstraints[0].securityCollections[0].name=test
keycloak.policy-enforcer-config.on-deny-redirect-to=/403
I'm attempting to connect to a CloudSQL instance via a cloudsql-proxy container on my Kubernetes deployment. I have the cloudsql credentials mounted and the value of GOOGLE_APPLICATION_CREDENTIALS set.
However, I'm still receiving the following error in my logs:
2018/10/08 20:07:28 Failed to connect to database: Post https://www.googleapis.com/sql/v1beta4/projects/[projectID]/instances/[appName]/createEphemeral?alt=json&prettyPrint=false: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: x509: certificate signed by unknown authority
My connection string looks like this:
[dbUser]:[dbPassword]@cloudsql([instanceName])/[dbName]?charset=utf8&parseTime=True&loc=Local
And the proxy dialer is blank-imported (for its side-effect registration) as:
_ github.com/GoogleCloudPlatform/cloudsql-proxy/proxy/dialers/mysql
Anyone have an idea what might be missing?
EDIT:
Deployment Spec looks something like this (JSON formatted):
{
"replicas": 1,
"selector": {
...
},
"template": {
...
"spec": {
"containers": [
{
"image": "[app-docker-imager]",
"name": "...",
"env": [
...
{
"name": "MYSQL_PASSWORD",
...
},
{
"name": "MYSQL_USER",
...
},
{
"name": "GOOGLE_APPLICATION_CREDENTIALS",
"value": "..."
}
],
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
}
],
"volumeMounts": [
{
"mountPath": "/secrets/cloudsql",
"name": "[secrets-mount-name]",
"readOnly": true
}
]
},
{
"command": [
"/cloud_sql_proxy",
"-instances=...",
"-credential_file=..."
],
"image": "gcr.io/cloudsql-docker/gce-proxy:1.11",
"name": "...",
"ports": [
{
"containerPort": 3306,
"protocol": "TCP"
}
],
"volumeMounts": [
{
"mountPath": "/secrets/cloudsql",
"name": "[secrets-mount-name]",
"readOnly": true
}
]
}
],
"volumes": [
{
"name": "[secrets-mount-name]",
"secret": {
"defaultMode": 420,
"secretName": "[secrets-mount-name]"
}
}
]
}
}
}
The error message indicates that your client is not able to trust the certificate of https://www.googleapis.com. There are two possible causes for this:
Your client does not know which root certificates to trust. The official cloudsql-proxy Docker image includes root certificates, so if you are using that image, this is not your problem. If you are not using that image, you should use it (or at least install CA certificates in your own image).
Your outbound traffic is being intercepted by a proxy server that is using a different, untrusted certificate. This might be malicious (in which case you need to investigate who is intercepting your traffic). More benignly, you might be in an organization that uses an outbound proxy to inspect traffic according to policy. If this is the case, you should build a new Docker image that includes the CA certificate used by your organization's outbound proxy.
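For the first cause, if you are building your own application image, a minimal sketch of installing the system CA bundle looks like this (the Alpine base image and binary name are assumptions for illustration):
FROM alpine:3.16
# Install the root CA bundle so TLS connections to googleapis.com can be verified
RUN apk add --no-cache ca-certificates
COPY your-app /usr/local/bin/your-app
ENTRYPOINT ["/usr/local/bin/your-app"]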
I am using Spring Boot / Cloud / Consul (1.0.0.M2, and I have also tried the current code as of 10/6/2015). I'm trying to register/deregister a service that uses a dynamic port and a dynamic ID.
I have the following bootstrap:
spring:
cloud:
consul:
config:
enabled: true
host: localhost
port: 8500
And application.yml
spring:
main:
show-banner: false
application:
name: helloService
cloud:
consul:
config:
prefix: config
defaultContext: helloService
discovery:
instanceId: ${spring.application.name}:${spring.application.instance.id:${random.value}}
healthCheckPath: /${spring.application.name}/health
healthCheckInterval: 15s
endpoints:
shutdown:
enabled: true
And in the Consul key/value store, config/application/server.port is set to 0 for a dynamic port.
The service is registered correctly during startup:
{
"consul": {
"ID": "consul",
"Service": "consul",
"Tags": [],
"Address": "",
"Port": 8300
},
"helloService-6596692c4e8af31ddd1589b0d359899f": {
"ID": "helloService-6596692c4e8af31ddd1589b0d359899f",
"Service": "helloService",
"Tags": [],
"Address": "",
"Port": 50307
} }
After issuing the shutdown:
curl http://localhost:50307/shutdown -X POST
{"message":"Shutting down, bye..."}
The service is still registered and the health check starts failing.
{
"consul": {
"ID": "consul",
"Service": "consul",
"Tags": [],
"Address": "",
"Port": 8300
},
"helloService-6596692c4e8af31ddd1589b0d359899f": {
"ID": "helloService-6596692c4e8af31ddd1589b0d359899f",
"Service": "helloService",
"Tags": [],
"Address": "",
"Port": 50307
} }
What is missing?