Spring Cloud Gateway API - Context-path on routes not working

I have set up a context-path in application.yml:
server:
  port: 4177
  max-http-header-size: 65536
  tomcat.accesslog:
    enabled: true
  servlet:
    context-path: /gb-integration
And I have configured some routes
@Bean
public RouteLocator routeLocator(RouteLocatorBuilder builder) {
    final String sbl = "http://localhost:4178";
    return builder.routes()
            // gb-sbl-rest
            .route("sbl", r -> r
                    .path("/sbl/**")
                    .filters(f -> f.rewritePath("/sbl/(?<segment>.*)", "/gb-sbl/${segment}"))
                    .uri(sbl))
            .build();
}
I want the API gateway to be reached using localhost:4177/gb-integration/sbl/**
However, it only works on localhost:4177/sbl/**
It seems my context-path is ignored.
Any ideas how I can get my context-path to work on all my routes?

You probably already figured it out by yourself, but here is what is working for me:
After reading the Spring Cloud documentation and trying many things on my own, I eventually opted for a route-by-route configuration. In your case, it would look something like this:
.path("/gb-integration/sbl/**")
and repeat the same pattern for every route.
.path("/gb-integration/abc/**")
...
.path("/gb-integration/def/**")
You can actually see this in the Spring Cloud documentation.
The Spring Cloud documentation seems to be a work in progress; hopefully we shall find a better solution.
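To make that concrete, here is a rough sketch of the question's RouteLocator with the /gb-integration prefix baked into the route definition. It is untested and assumes the downstream service still expects paths of the form /gb-sbl/**:

@Bean
public RouteLocator routeLocator(RouteLocatorBuilder builder) {
    final String sbl = "http://localhost:4178";
    return builder.routes()
            .route("sbl", r -> r
                    // prefix the route with the would-be context path
                    .path("/gb-integration/sbl/**")
                    // rewrite so the downstream service still sees /gb-sbl/...
                    .filters(f -> f.rewritePath("/gb-integration/sbl/(?<segment>.*)", "/gb-sbl/${segment}"))
                    .uri(sbl))
            .build();
}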

Expanding on @sendon1982's answer:
If your service is exposed at localhost:8080/color/red and you want it to be accessible through the gateway as localhost:9090/gateway/color/red, prepend /gateway in the Path predicate and add StripPrefix=1 under filters. This basically translates to:
take the requested path that matches Path, strip/remove the prefix segments up to the number given, and route to the configured uri with the stripped path.
my-app-gateway: /gateway

spring:
  cloud:
    gateway:
      routes:
        - id: color-service
          uri: http://localhost:8080
          predicates:
            - Path=${my-app-gateway}/color/**
          filters:
            - StripPrefix=1
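For reference, the same route can also be written with the Java route DSL. The following is only a sketch mirroring the YAML above, with "/gateway" standing in for ${my-app-gateway}:

@Bean
public RouteLocator colorServiceRoute(RouteLocatorBuilder builder) {
    return builder.routes()
            .route("color-service", r -> r
                    .path("/gateway/color/**")
                    // drop the first path segment ("/gateway") before forwarding
                    .filters(f -> f.stripPrefix(1))
                    .uri("http://localhost:8080"))
            .build();
}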

Using a YAML file like this:
spring:
  cloud:
    gateway:
      routes:
        - id: property-search-service-route
          uri: http://localhost:4178
          predicates:
            - Path=/gb-integration/sbl/**
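Note that with this route alone the matched path is forwarded to http://localhost:4178 unchanged, so the downstream service has to handle /gb-integration/sbl/** itself; otherwise add a StripPrefix or RewritePath filter as shown in the other answers.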

Fixed:
application.yaml:

gateway:
  discovery:
    locator:
      enabled: true
      lower-case-service-id: true
      filters:
        # strip the /ierp/[serviceId] prefix before forwarding
        - StripPath=2
      predicates:
        - name: Path
          # the route pattern matches /ierp/[serviceId]
          # see org.springframework.cloud.gateway.discovery.DiscoveryClientRouteDefinitionLocator#getRouteDefinitions
          args[pattern]: "'/ierp/'+serviceId+'/**'"
The custom StripPath filter:
@Component
public class StripPathGatewayFilterFactory extends
        AbstractGatewayFilterFactory<StripPathGatewayFilterFactory.Config> {

    /**
     * Parts key.
     */
    public static final String PARTS_KEY = "parts";

    public StripPathGatewayFilterFactory() {
        super(StripPathGatewayFilterFactory.Config.class);
    }

    @Override
    public List<String> shortcutFieldOrder() {
        return Arrays.asList(PARTS_KEY);
    }

    @Override
    public GatewayFilter apply(Config config) {
        return (exchange, chain) -> {
            ServerHttpRequest request = exchange.getRequest();
            ServerWebExchangeUtils.addOriginalRequestUrl(exchange, request.getURI());
            String path = request.getURI().getRawPath();
            String[] originalParts = StringUtils.tokenizeToStringArray(path, "/");
            // all new paths start with /
            StringBuilder newPath = new StringBuilder("/");
            for (int i = 0; i < originalParts.length; i++) {
                if (i >= config.getParts()) {
                    // only append slash if this is the second part or greater
                    if (newPath.length() > 1) {
                        newPath.append('/');
                    }
                    newPath.append(originalParts[i]);
                }
            }
            if (newPath.length() > 1 && path.endsWith("/")) {
                newPath.append('/');
            }
            ServerHttpRequest newRequest = request.mutate().path(newPath.toString()).contextPath(null).build();
            exchange.getAttributes().put(ServerWebExchangeUtils.GATEWAY_REQUEST_URL_ATTR, newRequest.getURI());
            return chain.filter(exchange.mutate().request(newRequest).build());
        };
    }

    public static class Config {

        private int parts;

        public int getParts() {
            return parts;
        }

        public void setParts(int parts) {
            this.parts = parts;
        }
    }
}
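As a usage sketch (not part of the original answer), the factory above can also be applied to a hand-written route. The Config class and apply(Config) are the ones defined above, while the route id, service name and lb:// URI are made up for illustration:

@Bean
public RouteLocator ierpRoutes(RouteLocatorBuilder builder,
                               StripPathGatewayFilterFactory stripPathFactory) {
    // build a filter instance that strips the first two segments (/ierp/{serviceId})
    StripPathGatewayFilterFactory.Config config = new StripPathGatewayFilterFactory.Config();
    config.setParts(2);
    return builder.routes()
            .route("ierp-demo-service", r -> r
                    .path("/ierp/demo-service/**")
                    .filters(f -> f.filter(stripPathFactory.apply(config)))
                    .uri("lb://demo-service"))
            .build();
}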

Related

Spring Boot custom Kubernetes readiness probe

I want to implement custom logic to determine readiness for my pod, and I went over this: https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.kubernetes-probes.external-state and they mention an example property:
management.endpoint.health.group.readiness.include=readinessState,customCheck
Question is - how do I override customCheck?
In my case I want to use HTTP probes, so the yaml looks like:
readinessProbe:
  initialDelaySeconds: 10
  periodSeconds: 10
  httpGet:
    path: /actuator/health
    port: 12345
So then again: where and how should I apply logic that determines when the app is ready? (Just like the link above, I'd like to rely on an external service for readiness.)
customCheck is the key for your custom HealthIndicator. The key for a given HealthIndicator is the name of the bean without the HealthIndicator suffix.
You can read:
https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.health.writing-custom-health-indicators
You are defining a readinessProbe, so hitting /actuator/health/readiness is probably a better choice.
// registering it as a bean (e.g. with @Component) makes the health key "customCheck":
// the bean name minus the HealthIndicator suffix
@Component
public class CustomCheckHealthIndicator extends AvailabilityStateHealthIndicator {

    private final YourService yourService;

    public CustomCheckHealthIndicator(ApplicationAvailability availability, YourService yourService) {
        super(availability, ReadinessState.class, (statusMappings) -> {
            statusMappings.add(ReadinessState.ACCEPTING_TRAFFIC, Status.UP);
            statusMappings.add(ReadinessState.REFUSING_TRAFFIC, Status.OUT_OF_SERVICE);
        });
        this.yourService = yourService;
    }

    @Override
    protected AvailabilityState getState(ApplicationAvailability applicationAvailability) {
        if (yourService.isInitCompleted()) {
            return ReadinessState.ACCEPTING_TRAFFIC;
        } else {
            return ReadinessState.REFUSING_TRAFFIC;
        }
    }
}

Set Prefix to Spring Micrometer Metrics using Statsd and Datadog

I'm trying to implement custom metrics integration for my app, using the following setup:
// DogStatsd Metrics Integration with MicroMeter
implementation group: 'io.micrometer', name: 'micrometer-registry-statsd', version: '1.7.2'
Custom Spring configuration added for the application:
@Configuration
public class MetricConfiguration {

    private final MeterRegistry meterRegistry;

    @Value("${management.metrics.export.statsd.prefix}")
    private String prefix;

    @Autowired
    public MetricConfiguration(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }

    @Bean
    public MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
        return registry -> registry.config().meterFilter(new MeterFilter() {
            @Override
            public Meter.Id map(Meter.Id id) {
                if (!id.getName().startsWith(prefix)) {
                    return id.withName(prefix + "." + id.getName());
                } else {
                    return id;
                }
            }
        });
    }

    @Bean
    public TimedAspect timedAspect() {
        return new TimedAspect(meterRegistry);
    }
}
YAML Configuration for Metrics
management:
  metrics:
    enable:
      jvm: false
      process: false
      tomcat: false
      system: false
      logback: false
    distribution:
      slo:
        http.server.requests: 50ms
      percentiles-histogram:
        http.server.requests: true
      percentiles:
        http.server.requests: 0.99
    export:
      statsd:
        enabled: false
        flavor: datadog
        host: ${DD_AGENT_HOST}
        port: 8125
        prefix: ${spring.application.name}
  endpoints:
    enabled-by-default: true
    web:
      exposure:
        include: "*"
  endpoint:
    metrics:
      enabled: true
    health:
      enabled: true
      show-components: "always"
      show-details: "always"
I am trying to set the prefix on all the custom metrics, but after setting the prefix the exclusions break and the excluded metrics start showing up in the /actuator/metrics response.
The response looks like below:
{
  "names": [
    "my-service.http.server.requests",
    "my-service.jvm.buffer.count",
    "my-service.jvm.buffer.memory.used",
    "my-service.jvm.buffer.total.capacity",
    "my-service.jvm.classes.loaded",
    "my-service.jvm.classes.unloaded",
    "my-service.jvm.gc.live.data.size",
    "my-service.jvm.gc.max.data.size",
    "my-service.jvm.gc.memory.allocated",
    "my-service.logback.events",
    "my-service.process.cpu.usage",
    "my-service.process.files.max",
    "my-service.process.files.open",
    "my-service.process.start.time",
    "my-service.process.uptime",
    "my-service.system.cpu.count",
    "my-service.system.cpu.usage",
    "my-service.system.load.average.1m",
    "my-service.tomcat.sessions.active.current",
    "my-service.tomcat.sessions.active.max",
    "my-service.tomcat.sessions.alive.max",
    "my-service.tomcat.sessions.created",
    "my-service.tomcat.sessions.expired",
    "my-service.tomcat.sessions.rejected"
  ]
}

shiro buji pac4j cas single sign out not working

spring boot 2.2.5
shiro-spring-boot-web-starter 1.5.1
buji-pac4j 4.1.1
pac4j-cas 3.8.3
cas overlay template 5.3.
I start the CAS server in Tomcat with HTTPS, and start two clients (pac4j1 and pac4j2) in Eclipse.
Single sign-on works, but single sign-out fails.
Following are my configs:
I only added one service file to the CAS server, which looks like:
{
  "@class": "org.apereo.cas.services.RegexRegisteredService",
  "serviceId": "^(http)://localhost.*",
  "name": "local",
  "id": 10000003,
  "evaluationOrder": 1
}
application.yml of pac4j1:
server:
  port: 8444
  servlet:
    context-path: /pac4j1
cas:
  client-name: pac4j1Client
  server:
    url: https://localhost:8443/cas
  project:
    url: http://localhost:8444/pac4j1
Pac4jConfig:
@Configuration
public class Pac4jConfig {

    @Value("${cas.server.url}")
    private String casServerUrl;

    @Value("${cas.project.url}")
    private String projectUrl;

    @Value("${cas.client-name}")
    private String clientName;

    @Bean("authcConfig")
    public Config config(CasClient casClient, ShiroSessionStore shiroSessionStore) {
        Config config = new Config(casClient);
        config.setSessionStore(shiroSessionStore);
        return config;
    }

    @Bean
    public ShiroSessionStore shiroSessionStore() {
        return new ShiroSessionStore();
    }

    @Bean
    public CasClient casClient(CasConfiguration casConfig) {
        CasClient casClient = new CasClient(casConfig);
        casClient.setCallbackUrl(projectUrl + "/callback?client_name=" + clientName);
        casClient.setName(clientName);
        return casClient;
    }

    @Bean
    public CasConfiguration casConfig() {
        final CasConfiguration configuration = new CasConfiguration();
        configuration.setLoginUrl(casServerUrl + "/login");
        configuration.setProtocol(CasProtocol.CAS20);
        configuration.setAcceptAnyProxy(true);
        configuration.setPrefixUrl(casServerUrl + "/");
        return configuration;
    }
}
shiro config:
@Configuration
public class ShiroConfig {

    @Value("${cas.project.url}")
    private String projectUrl;

    @Value("${cas.server.url}")
    private String casServerUrl;

    @Value("${cas.client-name}")
    private String clientName;

    @Bean("securityManager")
    public DefaultWebSecurityManager securityManager(Pac4jSubjectFactory subjectFactory, CasRealm casRealm) {
        DefaultWebSecurityManager manager = new DefaultWebSecurityManager();
        manager.setRealm(casRealm);
        manager.setSubjectFactory(subjectFactory);
        return manager;
    }

    @Bean
    public CasRealm casRealm() {
        CasRealm realm = new CasRealm();
        realm.setClientName(clientName);
        realm.setCachingEnabled(false);
        realm.setAuthenticationCachingEnabled(false);
        realm.setAuthorizationCachingEnabled(false);
        return realm;
    }

    @Bean
    public Pac4jSubjectFactory subjectFactory() {
        return new Pac4jSubjectFactory();
    }

    @Bean
    public FilterRegistrationBean<SingleSignOutFilter> singleSignOutFilter() {
        FilterRegistrationBean<SingleSignOutFilter> bean = new FilterRegistrationBean<SingleSignOutFilter>();
        bean.setName("singleSignOutFilter");
        SingleSignOutFilter singleSignOutFilter = new SingleSignOutFilter();
        singleSignOutFilter.setCasServerUrlPrefix(casServerUrl);
        singleSignOutFilter.setIgnoreInitConfiguration(true);
        bean.setFilter(singleSignOutFilter);
        bean.addUrlPatterns("/*");
        bean.setEnabled(true);
        bean.setOrder(Ordered.HIGHEST_PRECEDENCE);
        return bean;
    }

    @Bean
    public FilterRegistrationBean<DelegatingFilterProxy> filterRegistrationBean() {
        FilterRegistrationBean<DelegatingFilterProxy> filterRegistration = new FilterRegistrationBean<DelegatingFilterProxy>();
        filterRegistration.setFilter(new DelegatingFilterProxy("shiroFilter"));
        filterRegistration.addInitParameter("targetFilterLifecycle", "true");
        filterRegistration.setEnabled(true);
        filterRegistration.addUrlPatterns("/*");
        filterRegistration.setDispatcherTypes(DispatcherType.REQUEST, DispatcherType.FORWARD);
        return filterRegistration;
    }

    private void loadShiroFilterChain(ShiroFilterFactoryBean shiroFilterFactoryBean) {
        Map<String, String> filterChainDefinitionMap = new LinkedHashMap<>();
        filterChainDefinitionMap.put("/", "securityFilter");
        filterChainDefinitionMap.put("/index", "securityFilter");
        filterChainDefinitionMap.put("/callback", "callbackFilter");
        filterChainDefinitionMap.put("/logout", "logout");
        filterChainDefinitionMap.put("/**", "anon");
        shiroFilterFactoryBean.setFilterChainDefinitionMap(filterChainDefinitionMap);
    }

    @Bean("shiroFilter")
    public ShiroFilterFactoryBean factory(DefaultWebSecurityManager securityManager, Config config) {
        ShiroFilterFactoryBean shiroFilterFactoryBean = new ShiroFilterFactoryBean();
        shiroFilterFactoryBean.setSecurityManager(securityManager);
        loadShiroFilterChain(shiroFilterFactoryBean);
        Map<String, Filter> filters = new HashMap<>(3);
        SecurityFilter securityFilter = new SecurityFilter();
        securityFilter.setConfig(config);
        securityFilter.setClients(clientName);
        filters.put("securityFilter", securityFilter);
        MyCallbackFilter callbackFilter = new MyCallbackFilter();
        callbackFilter.setConfig(config);
        callbackFilter.setDefaultUrl(projectUrl);
        filters.put("callbackFilter", callbackFilter);
        LogoutFilter logoutFilter = new LogoutFilter();
        logoutFilter.setConfig(config);
        logoutFilter.setCentralLogout(true);
        logoutFilter.setLocalLogout(true);
        logoutFilter.setDefaultUrl(projectUrl + "/callback?client_name=" + clientName);
        filters.put("logout", logoutFilter);
        shiroFilterFactoryBean.setFilters(filters);
        return shiroFilterFactoryBean;
    }
}
The application.properties of the CAS server is the default; the CAS server uses HTTPS (https://localhost:8443/cas) while the CAS clients use HTTP (http://localhost:8444/pac4j1).
Where am I wrong?
With the help of the SLO link provided by leopal, I learned that the CAS server needs to send a logout request back to the client.
Hence, I checked the CAS server log and found INFO [org.apereo.cas.logout.DefaultLogoutManager] - <Performing logout operations for.
So I enabled logging for org.apereo.cas.logout and found several classes involved in logout: DefaultLogoutManager, DefaultSingleLogoutServiceLogoutUrlBuilder, DefaultSingleLogoutServiceMessageHandler and SimpleUrlValidator.
When performing logout, DefaultSingleLogoutServiceLogoutUrlBuilder.determineLogoutUrl gets the logout URL from the registered service, or falls back to the original URL from the CAS client if that original URL is valid.
So my problem was: I didn't define a logout URL in the service JSON file, and the original URL from the CAS client was localhost:8444, which is not a valid IPv4 address. As a result, the CAS server did not send the logout request back to the client.
The solution: use an IP address instead of localhost in the project URL in the CAS client's application.yml:
cas:
  client-name: pac4j1Client
  server:
    url: https://localhost:8443/cas
  project:
    url: http://192.168.2.119:8444/pac4j1
Another solution is to set logoutUrl in each CAS client's service JSON file (not tried yet).

Spring Cloud Stream Kafka Channel Not Working in Spring Boot Application

I have been attempting to get an inbound SubscribableChannel and outbound MessageChannel working in my spring boot application.
I have successfully setup the kafka channel and tested it successfully.
Furthermore, I have created a basic Spring Boot application that tests adding to and receiving things from the channel.
The issue I am having is that when I put the equivalent code into the application it belongs in, the messages never seem to get sent or received. Debugging makes it hard to ascertain what's going on, but the only thing that looks different to me is the channel name: in the working implementation the channel name is something like application.channel, whereas in the non-working app it is localhost:8080/channel.
I was wondering if there is some Spring Boot configuration blocking or altering the creation of the channels into a different channel source?
Has anyone had any similar issues?
application.yml
spring:
  datasource:
    url: jdbc:h2:mem:dpemail;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
    platform: h2
    username: hello
    password:
    driverClassName: org.h2.Driver
  jpa:
    properties:
      hibernate:
        show_sql: true
        use_sql_comments: true
        format_sql: true
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
      bindings:
        email-in:
          destination: email
          contentType: application/json
        email-out:
          destination: email
          contentType: application/json
Email
public class Email {

    private long timestamp;
    private String message;

    public long getTimestamp() {
        return timestamp;
    }

    public void setTimestamp(long timestamp) {
        this.timestamp = timestamp;
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}
Binding Config
@EnableBinding(EmailQueues.class)
public class EmailQueueConfiguration {
}
Interface
public interface EmailQueues {

    String INPUT = "email-in";
    String OUTPUT = "email-out";

    @Input(INPUT)
    SubscribableChannel inboundEmails();

    @Output(OUTPUT)
    MessageChannel outboundEmails();
}
Controller
@RestController
@RequestMapping("/queue")
public class EmailQueueController {

    private EmailQueues emailQueues;

    @Autowired
    public EmailQueueController(EmailQueues emailQueues) {
        this.emailQueues = emailQueues;
    }

    @RequestMapping(value = "sendEmail", method = POST)
    @ResponseStatus(ACCEPTED)
    public void sendToQueue() {
        MessageChannel messageChannel = emailQueues.outboundEmails();
        Email email = new Email();
        email.setMessage("hello world: " + System.currentTimeMillis());
        email.setTimestamp(System.currentTimeMillis());
        messageChannel.send(MessageBuilder.withPayload(email)
                .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON).build());
    }

    @StreamListener(EmailQueues.INPUT)
    public void handleEmail(@Payload Email email) {
        System.out.println("received: " + email.getMessage());
    }
}
I'm not sure whether one of the inherited configuration projects using Spring Cloud or Spring Cloud Sleuth might be preventing it from working, but even when I remove them it still doesn't work. Unlike my application that does work with the above code, I never see the ConsumerConfig being configured, e.g.:
o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 100
auto.offset.reset = latest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.id = consumer-2
connections.max.idle.ms = 540000
enable.auto.commit = false
exclude.internal.topics = true
(This configuration is what I see in my basic Spring Boot application when running the above code, and there the code works, writing to and reading from the Kafka channel.)
I assume there is some other Spring Boot configuration from one of the libraries I'm using that creates a different type of channel; I just cannot find what that configuration is.
What you posted contains a lot of unrelated configuration, so it's hard to determine whether anything gets in the way. Also, when you say "..it appears that the messages never get sent or received..", are there any exceptions in the logs? Also, please state the version of Kafka you're using as well as the version of Spring Cloud Stream.
Now, I did try to reproduce it based on your code (after cleaning up a bit to only leave relevant parts) and was able to successfully send/receive.
My Kafka version is 0.11 and Spring Cloud Stream 2.0.0.
Here is the relevant code:
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
      bindings:
        email-in:
          destination: email
        email-out:
          destination: email
@SpringBootApplication
@EnableBinding(KafkaQuestionSoApplication.EmailQueues.class)
public class KafkaQuestionSoApplication {

    public static void main(String[] args) {
        SpringApplication.run(KafkaQuestionSoApplication.class, args);
    }

    @Bean
    public ApplicationRunner runner(EmailQueues emailQueues) {
        return new ApplicationRunner() {
            @Override
            public void run(ApplicationArguments args) throws Exception {
                emailQueues.outboundEmails().send(new GenericMessage<String>("Hello"));
            }
        };
    }

    @StreamListener(EmailQueues.INPUT)
    public void handleEmail(String payload) {
        System.out.println("received: " + payload);
    }

    public interface EmailQueues {

        String INPUT = "email-in";
        String OUTPUT = "email-out";

        @Input(INPUT)
        SubscribableChannel inboundEmails();

        @Output(OUTPUT)
        MessageChannel outboundEmails();
    }
}
Okay, so after a lot of debugging... I discovered that something is creating a Test Support Binder (how, I don't know yet), which is obviously used so that messages are not added to a real channel.
After adding
@SpringBootApplication(exclude = TestSupportBinderAutoConfiguration.class)
The Kafka channel configuration now works and messages are being added. It would be interesting to know what on earth is setting up this test support binder... I'll find that sucker eventually.
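(For what it's worth, the TestSupportBinder normally comes from the spring-cloud-stream-test-support artifact, which is meant to be test-scoped; if that dependency ends up on the main compile/runtime classpath it takes precedence over the real Kafka binder, which would explain the behaviour above.)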

How to do URL Rewrite in Zuul Proxy?

One of the requests that comes to my Zuul filter has the URI /hello/World, which I want to redirect to /myapp/test. This /myapp/test is a service that is registered in Eureka.
zuul:
  routes:
    xyz:
      path: /hello/World
      url: http://localhost:1234/myapp/test
      stripPrefix: true
When I try the above configuration, the incoming URI is suffixed to the configured URL, giving http://localhost:1234/myapp/test/World. A few of the links I came across seem to state that a URL rewrite feature is not yet available in Zuul.
Is there any other way this can be done at the Zuul layer?
Note: at this point in time, I cannot do this reverse proxying in the web server or any other layer, since my Zuul filter is the one receiving the request directly.
Using @Adelin's solution, with small improvements.
Use the 'url' property as the path to prepend to customize the URL rewriting (I have disabled Eureka in my example):
ribbon.eureka.enabled=false
zuul.routes.route1.path=/route1/**
zuul.routes.route1.serviceId=service1
zuul.routes.route1.url=/path/to/prepend
service1.ribbon.listOfServers=http://server1
Then implement the following filter:
/**
 * Fixing missing URL rewriting when using Ribbon.
 */
@Component
public class CustomPathZuulFilter extends ZuulFilter {

    @Autowired
    private ZuulProperties zuulProperties;

    @Override
    public String filterType() {
        return FilterConstants.PRE_TYPE;
    }

    @Override
    public int filterOrder() {
        return FilterConstants.PRE_DECORATION_FILTER_ORDER + 1;
    }

    @Override
    public boolean shouldFilter() {
        // override PreDecorationFilter only if it executed successfully before
        return RequestContext.getCurrentContext().getFilterExecutionSummary().toString()
                .contains("PreDecorationFilter[SUCCESS]");
    }

    @Override
    public Object run() {
        final RequestContext context = RequestContext.getCurrentContext();
        if (context.get(FilterConstants.SERVICE_ID_KEY) == null || context.getRouteHost() != null) {
            // not a Ribbon route
            return null;
        }
        // get current ZuulRoute
        final String proxy = (String) context.get(FilterConstants.PROXY_KEY);
        final ZuulRoute zuulRoute = this.zuulProperties.getRoutes().get(proxy);
        // patch URL by prefixing it with zuulRoute.url
        final Object originalRequestPath = context.get(FilterConstants.REQUEST_URI_KEY);
        final String modifiedRequestPath = zuulRoute.getUrl() + originalRequestPath;
        context.put(FilterConstants.REQUEST_URI_KEY, modifiedRequestPath);
        // patch serviceId because:
        // - it has been set to route.location in PreDecorationFilter
        // - route.location has been set to zuulRoute.location in SimpleRouteLocator
        // - zuulRoute.location returns zuulRoute.url if set
        context.set(FilterConstants.SERVICE_ID_KEY, zuulRoute.getServiceId());
        return null;
    }
}
Now calls to /route1 will be proxied to http://server1/path/to/prepend.
This solution is also compatible with co-existing routes that do not use Ribbon.
Example of a co-existing route not using Ribbon:
zuul.routes.route2.path=/route2/**
zuul.routes.route2.url=http://server2/some/path
Calls to /route2 will be proxied to http://server2/some/path by SimpleHostRoutingFilter (if it is not disabled).
Here is the solution posted in the link by @Vikash:
@Component
public class CustomPathZuulFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "pre";
    }

    @Override
    public int filterOrder() {
        return PreDecorationFilter.FILTER_ORDER + 1;
    }

    @Override
    public boolean shouldFilter() {
        return true;
    }

    @Override
    public Object run() {
        RequestContext context = RequestContext.getCurrentContext();
        Object originalRequestPath = context.get(REQUEST_URI_KEY);
        String modifiedRequestPath = "/api/microservicePath" + originalRequestPath;
        context.put(REQUEST_URI_KEY, modifiedRequestPath);
        return null;
    }
}
Have you tried creating a preFilter or even a routeFilter? That way you can intercept the request and change the routing.
See Zuul Filters.
