I'm trying to use MRUnit to test my sort comparator class. It seems MRUnit should be able to do this via the setKeyOrderComparator method, but when I run the mapReduceDriver it never calls the compare() method of my SortComparator class.
I'm pretty sure I'm doing something wrong with the MRUnit API.
Here's my code for the unit test:
public class UnitTests {
private static transient Log log = LogFactory.getLog(UnitTests.class);
MapReduceDriver<Text, Text, Text, Text, Text, Text> mapReduceDriver;
MapDriver<Text, Text, Text, Text> mapDriver;
ReduceDriver<Text, Text, Text, Text> reduceDriver;
@Before
public void setUp() throws InterruptedException, IOException {
mapDriver = new MapDriver<Text, Text, Text, Text>();
mapDriver.setMapper(new TestMapper());
reduceDriver = new ReduceDriver<Text, Text, Text, Text>();
reduceDriver.setReducer(new TestReducer());
mapReduceDriver = new MapReduceDriver<Text, Text, Text, Text, Text, Text>(new TestMapper(), new TestReducer());
mapReduceDriver.setKeyOrderComparator(new TestSortCompartor());
}
@Test
public void testSort() throws IOException {
Text inputKey1 = new Text("def");
Text inputKey2 = new Text("abc");
Text inputValue = new Text("BlahBlahBlah");
mapReduceDriver.addInput(inputKey1, inputValue);
mapReduceDriver.addInput(inputKey2, inputValue);
List<Pair<Text, Text>> output = mapReduceDriver.run();
log.info("Got output of size "+output.size()+" with first pair = "+output.get(0).toString());
}
}
And here's my test sortComparator:
public class TestSortCompartor extends WritableComparator{
private static transient Log log = LogFactory.getLog(TestSortCompartor.class);
public TestSortCompartor() {
super(Text.class, true);
}
@SuppressWarnings("rawtypes")
@Override
public int compare(WritableComparable w1, WritableComparable w2) {
log.info("calling compare with key1 = "+w1.toString()+" and key2 "+w2.toString());
return w1.compareTo(w2) * -1;
}
}
When I run the test I get this output:
INFO 2014-01-13 09:34:27,362 [main] (com.gradientx.gxetl.testMR.UnitTests:53) -
Got output of size 2 with first pair = (abc, BlahBlahBlah)
But there's no log output from the comparator class, and the keys are not ordered in reverse, so I know my compare() method is not being called.
Can anyone advise what I'm doing wrong? Is it possible to use MRUnit to test my own comparator class? Is there a better way to write a unit test for a custom comparator class?
FYI, here are the relevant dependencies from my POM:
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-core</artifactId>
<version>2.0.0-mr1-cdh4.4.0</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.0.0-mr1-cdh4.4.0</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.mrunit</groupId>
<artifactId>mrunit</artifactId>
<version>1.0.0</version>
<classifier>hadoop2</classifier>
</dependency>
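In the meantime I can at least pin down the comparator's behavior with plain JUnit, without any MRUnit driver. A minimal sketch of such a direct test (my stand-in, assuming JUnit 4 and Hadoop's Text class on the classpath):
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import org.apache.hadoop.io.Text;
import org.junit.Test;
public class TestSortCompartorDirectTest {
@Test
public void compareReversesNaturalOrder() {
TestSortCompartor comparator = new TestSortCompartor();
// with the natural order inverted, "abc" must sort after "def"
assertTrue(comparator.compare(new Text("abc"), new Text("def")) > 0);
assertTrue(comparator.compare(new Text("def"), new Text("abc")) < 0);
assertEquals(0, comparator.compare(new Text("abc"), new Text("abc")));
}
}
This doesn't exercise the driver's shuffle, but it at least verifies compare() in isolation.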
Following is the scenario:
I create a Reactor Kafka receiver
Data consumed from the Kafka receiver is published to a WebSocketHandler
The WebSocketHandler is mapped to a URL using SimpleUrlHandlerMapping
The URL pattern is api/v1/ws/{ID}, and I expect multiple WebSocketSessions to be created based on the different IDs used in the URI, all managed by a single WebSocketHandler, which is in fact happening
But when data from the Kafka receiver is published, only the first created WebSocketSession receives it; all the other WebSocketSessions do not receive the data
I am using spring-boot 2.6.3 with starter-tomcat
How do I publish data to all the created WebSocketSessions?
My Code:
Config for the WebSocket handler
@Configuration
@Slf4j
public class OneSecPollingWebSocketConfig
{
private OneSecPollingWebSocketHandler oneSecPollingHandler;
@Autowired
public OneSecPollingWebSocketConfig(OneSecPollingWebSocketHandler oneSecPollingHandler)
{
this.oneSecPollingHandler = oneSecPollingHandler;
}
@Bean
public HandlerMapping webSocketHandlerMapping()
{
log.info("onesecpolling websocket configured");
Map<String, WebSocketHandler> handlerMap = new HashMap<>();
handlerMap.put(WEB_SOCKET_ENDPOINT, oneSecPollingHandler);
SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping();
mapping.setUrlMap(handlerMap);
mapping.setOrder(1);
return mapping;
}
}
Code for the WebSocketHandler
@Component
@Slf4j
public class OneSecPollingWebSocketHandler implements WebSocketHandler
{
private ObjectMapper objectMapper;
private OneSecPollingKafkaConsumerService oneSecPollingKafkaConsumerService;
private Map<String, WebSocketSession> wsSessionsByUserSessionId = new HashMap<>();
@Autowired
public OneSecPollingWebSocketHandler(ObjectMapper objectMapper, OneSecPollingKafkaConsumerService oneSecPollingKafkaConsumerService)
{
this.objectMapper = objectMapper;
this.oneSecPollingKafkaConsumerService = oneSecPollingKafkaConsumerService;
}
@Override
public Mono<Void> handle(WebSocketSession webSocketSession)
{
Many<String> sink = Sinks.many().multicast().onBackpressureBuffer(Queues.SMALL_BUFFER_SIZE, false);
wsSessionsByUserSessionId.put(getUserPollingSessionId(webSocketSession), webSocketSession);
sinkSubscription(webSocketSession, sink);
Mono<Void> output = webSocketSession.send(sink.asFlux().map(webSocketSession::textMessage)).doOnSubscribe(subscription ->
{
});
return Mono.zip(webSocketSession.receive().then(), output).then();
}
public void sinkSubscription(WebSocketSession webSocketSession, Many<String> sink)
{
log.info("number of sessions; {}", wsSessionsByUserSessionId.size());
oneSecPollingKafkaConsumerService.getTestTopicFlux().doOnNext(record ->
{
//log.info("record: {}", record);
sink.tryEmitNext(record.value());
record.receiverOffset().acknowledge();
}).subscribe();
}
public String getOneSecPollingTopicRecord(ReceiverRecord<Integer, String> record, WebSocketSession webSocketSession)
{
String lastRecord = record.value();
log.info("record to send: {} : webSocketSession: {}", record.value(), webSocketSession.getId());
record.receiverOffset().acknowledge();
return lastRecord;
}
public String getUserPollingSessionId(WebSocketSession webSocketSession)
{
UriTemplate template = new UriTemplate(WEB_SOCKET_ENDPOINT);
URI uri = webSocketSession.getHandshakeInfo().getUri();
Map<String, String> parameters = template.match(uri.getPath());
String userPollingSessionId = parameters.get("userPollingSessionId");
return userPollingSessionId;
}
}
Kafka Receiver
@Service
@Slf4j
public class OneSecPollingKafkaConsumerService
{
private String bootStrapServers;
@Autowired
public OneSecPollingKafkaConsumerService(@Value("${bootstrap.servers}") String bootStrapServers)
{
this.bootStrapServers = bootStrapServers;
}
private ReceiverOptions<Integer, String> getReceiverOptions()
{
Map<String, Object> consumerProps = new HashMap<>();
consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootStrapServers);
//consumerProps.put(ConsumerConfig.CLIENT_ID_CONFIG, "reactive-consumer");
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "onesecpolling-group");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
//consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
//consumerProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
ReceiverOptions<Integer, String> receiverOptions = ReceiverOptions
.<Integer, String> create(consumerProps)
.subscription(Collections.singleton("HighFrequencyPollingKPIsComputedValues"));
return receiverOptions;
}
public Flux<ReceiverRecord<Integer, String>> getTestTopicFlux()
{
return createTopicCache();
}
private Flux<ReceiverRecord<Integer, String>> createTopicCache()
{
Flux<ReceiverRecord<Integer, String>> oneSecPollingMessagesFlux = KafkaReceiver.create(getReceiverOptions())
.receive()
.delayElements(Duration.ofMillis(500));
return oneSecPollingMessagesFlux;
}
}
POM dependencies
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
</dependency>
<!--
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
-->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
<groupId>io.projectreactor.kafka</groupId>
<artifactId>reactor-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<!-- This is breaking WebFlux
<dependency>
<groupId>org.springdoc</groupId>
<artifactId>springdoc-openapi-ui</artifactId>
<version>${springdoc.version}</version>
</dependency>
-->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.projectreactor</groupId>
<artifactId>reactor-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
<classifier>test-binder</classifier>
<type>test-jar</type>
<scope>test</scope>
</dependency>
<!-- <dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-test</artifactId>
<scope>test</scope>
</dependency> -->
</dependencies>
I also tried changing the handle(...) method definition in the WebSocketHandler to the following, but data from Kafka is still pushed to only one WebSocket session:
@Override
public Mono<Void> handle(WebSocketSession webSocketSession)
{
Mono<Void> input = webSocketSession.receive().then();
Mono<Void> output = webSocketSession.send(oneSecPollingKafkaConsumerService.getTestTopicFlux().map(ReceiverRecord::value).map(webSocketSession::textMessage));
return Mono.zip(input, output).then();
}
Also, I tried the following:
public Mono<Void> handle(WebSocketSession webSocketSession)
{
Mono<Void> input = webSocketSession.receive()
.doOnSubscribe(subscribe -> log.info("sesseion created sessionId:{}:userId:{};sessionhash:{}",
webSocketSession.getId(),
getUserPollingSessionId(webSocketSession),
webSocketSession.hashCode()))
.then();
Flux<String> source = oneSecPollingKafkaConsumerService.getTestTopicFlux().map(record -> getOneSecPollingTopicRecord(record, webSocketSession)).log();
Mono<Void> output = webSocketSession.send(source.map(webSocketSession::textMessage)).log();
return Mono.zip(input, output).then().log();
}
I enabled log() and got the following output:
20:09:22.652 [http-nio-8080-exec-9] INFO c.m.e.w.p.i.w.v.OneSecPollingWebSocketHandler - sesseion created sessionId:a:userId:124;sessionhash:1974799413
20:09:22.652 [http-nio-8080-exec-9] INFO reactor.Flux.RefCount.41 - | onSubscribe([Fuseable] FluxRefCount.RefCountInner)
20:09:22.652 [http-nio-8080-exec-9] INFO reactor.Flux.Map.42 - onSubscribe(FluxMap.MapSubscriber)
20:09:22.652 [http-nio-8080-exec-9] INFO reactor.Flux.Map.42 - request(1)
20:09:22.652 [http-nio-8080-exec-9] INFO reactor.Flux.RefCount.41 - | request(32)
20:09:22.659 [http-nio-8080-exec-9] INFO reactor.Mono.FromPublisher.43 - onSubscribe(MonoNext.NextSubscriber)
20:09:22.659 [http-nio-8080-exec-9] INFO reactor.Mono.FromPublisher.43 - request(unbounded)
20:09:25.942 [http-nio-8080-exec-10] INFO reactor.Mono.IgnorePublisher.48 - onSubscribe(MonoIgnoreElements.IgnoreElementsSubscriber)
20:09:25.942 [http-nio-8080-exec-10] INFO reactor.Mono.IgnorePublisher.48 - request(unbounded)
20:09:25.942 [http-nio-8080-exec-10] INFO c.m.e.w.p.i.w.v.OneSecPollingWebSocketHandler - sesseion created sessionId:b:userId:123;sessionhash:1582184236
20:09:25.942 [http-nio-8080-exec-10] INFO reactor.Flux.RefCount.45 - | onSubscribe([Fuseable] FluxRefCount.RefCountInner)
20:09:25.942 [http-nio-8080-exec-10] INFO reactor.Flux.Map.46 - onSubscribe(FluxMap.MapSubscriber)
20:09:25.942 [http-nio-8080-exec-10] INFO reactor.Flux.Map.46 - request(1)
20:09:25.942 [http-nio-8080-exec-10] INFO reactor.Flux.RefCount.45 - | request(32)
20:09:25.947 [http-nio-8080-exec-10] INFO reactor.Mono.FromPublisher.47 - onSubscribe(MonoNext.NextSubscriber)
20:09:25.949 [http-nio-8080-exec-10] INFO reactor.Mono.FromPublisher.47 - request(unbounded)
20:10:00.880 [reactive-kafka-onesecpolling-group-11] INFO reactor.Flux.RefCount.41 - | onNext(ConsumerRecord(topic = HighFrequencyPollingKPIsComputedValues, partition = 0, leaderEpoch = null, offset = 474, CreateTime = 1644071999871, serialized key size = -1, serialized value size = 43, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = {"greeting" : "Hello", "name" : "Prashant"}))
20:10:01.387 [parallel-5] INFO reactor.Flux.Map.42 - onNext({"greeting" : "Hello", "name" : "Prashant"})
20:10:01.389 [parallel-5] INFO reactor.Flux.Map.42 - request(1)
Here we can see that we have 2 subscribers to the reactor-kafka flux:
reactor.Flux.Map.42 - onSubscribe(FluxMap.MapSubscriber)
reactor.Flux.Map.46 - onSubscribe(FluxMap.MapSubscriber)
but when data is read from the Kafka topic, it is received by only one subscriber:
reactor.Flux.Map.42 - onNext({"greeting" : "Hello", "name" : "Prashant"})
Is this a bug in the WebFlux API itself?
I have found the issue and the solution.
Problem
The way I was using the Flux (obtained from the KafkaReceiver) in the WebSocketHandler's handle() method was not correct.
The handle() method gets called for each WebSocket session created from the client requests, so a separate Flux for KafkaReceiver.create().receive() was created per session. Only one of those Flux objects actually read data from the KafkaReceiver; the others failed to do so.
public Mono<Void> handle(WebSocketSession webSocketSession)
{
Mono<Void> input = webSocketSession.receive()
.doOnSubscribe(subscribe -> log.info("sesseion created sessionId:{}:userId:{};sessionhash:{}",
webSocketSession.getId(),
getUserPollingSessionId(webSocketSession),
webSocketSession.hashCode()))
.then();
Flux<String> source = oneSecPollingKafkaConsumerService.getTestTopicFlux().map(record -> getOneSecPollingTopicRecord(record, webSocketSession)).log(); // <-- creates a brand-new Flux for every session
Mono<Void> output = webSocketSession.send(source.map(webSocketSession::textMessage)).log();
return Mono.zip(input, output).then().log();
}
Solution
Make sure that only one Flux is created for KafkaReceiver.create().receive(). One way to do so is to create the Flux in the constructor of the WebSocketHandler (or of the KafkaConsumer class):
private final Flux<String> source;
@Autowired
public OneSecPollingWebSocketHandler(OneSecPollingKafkaConsumerService oneSecPollingKafkaConsumerService)
{
source = oneSecPollingKafkaConsumerService.getOneSecPollingTopicFlux().map(r -> getOneSecPollingTopicRecord(r));
}
@Override
public Mono<Void> handle(WebSocketSession webSocketSession)
{
// add usersession id as session attribute
Mono<Void> input = getInputMessageMono(webSocketSession);
Mono<Void> output = getOutputMessageMono(webSocketSession);
return Mono.zip(input, output).then().log();
}
private Mono<Void> getOutputMessageMono(WebSocketSession webSocketSession)
{
Mono<Void> output = webSocketSession.send(source.map(webSocketSession::textMessage)).doOnError(err -> log.error(err.getMessage())).doOnTerminate(() ->
{
log.info("onesecpolling session terminated;{}", webSocketSession.getId());
}).log();
return output;
}
private Mono<Void> getInputMessageMono(WebSocketSession webSocketSession)
{
Mono<Void> input = webSocketSession.receive().doOnSubscribe(subscribe ->
{
log.info("onesecpolling session created sessionId:{}:userId:{}", webSocketSession.getId(), getUserPollingSessionId(webSocketSession));
}).then();
return input;
}
private String getOneSecPollingTopicRecord(ReceiverRecord<Integer, String> record)
{
String lastRecord = record.value();
record.receiverOffset().acknowledge();
return lastRecord;
}
private String getUserPollingSessionId(WebSocketSession webSocketSession)
{
UriTemplate template = new UriTemplate(WEB_SOCKET_ENDPOINT);
URI uri = webSocketSession.getHandshakeInfo().getUri();
Map<String, String> parameters = template.match(uri.getPath());
String userPollingSessionId = parameters.get(WEB_SOCKET_ENDPOINT_USERID_SUBPATH);
return userPollingSessionId;
}
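One caveat to this fix: a single Flux instance only fans out to all sessions if it is also multicast; a plain cold Flux would still re-subscribe to the KafkaReceiver for each session's send(). Reactor's share() (shorthand for publish().refCount()) is one way to get that behavior. A minimal sketch of the consumer-side helper (my assumption, reusing the receiver options helper shown earlier):
private Flux<String> createSharedTopicFlux()
{
// share() makes the receiver Flux hot: the first subscriber starts the
// Kafka subscription, and every later WebSocket session attaches to the
// same stream instead of creating another consumer in the group
return KafkaReceiver.create(getReceiverOptions())
.receive()
.map(record -> {
String value = record.value();
record.receiverOffset().acknowledge();
return value;
})
.share();
}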
Based on this answer and the comments, I implemented the code below to receive the scores of an Elasticsearch query.
public class CustomizedHotelRepositoryImpl implements CustomizedHotelRepository {
private final ElasticsearchTemplate elasticsearchTemplate;
@Autowired
public CustomizedHotelRepositoryImpl(ElasticsearchTemplate elasticsearchTemplate) {
super();
this.elasticsearchTemplate = elasticsearchTemplate;
}
@Override
public Page<Hotel> findHotelsAndScoreByName(String name) {
QueryBuilder queryBuilder = QueryBuilders.boolQuery()
.should(QueryBuilders.queryStringQuery(name).lenient(true).defaultOperator(Operator.OR).field("name"));
NativeSearchQuery nativeSearchQuery = new NativeSearchQueryBuilder().withQuery(queryBuilder)
.withPageable(PageRequest.of(0, 100)).build();
DefaultEntityMapper mapper = new DefaultEntityMapper();
ResultsExtractor<Page<Hotel>> rs = new ResultsExtractor<Page<Hotel>>() {
@Override
public Page<Hotel> extract(SearchResponse response) {
ArrayList<Hotel> hotels = new ArrayList<>();
SearchHit[] hits = response.getHits().getHits();
for (SearchHit hit : hits) {
try {
Hotel hotel = mapper.mapToObject(hit.getSourceAsString(), Hotel.class);
hotel.setScore(hit.getScore());
hotels.add(hotel);
} catch (IOException e) {
e.printStackTrace();
}
}
return new PageImpl<>(hotels, PageRequest.of(0, 100), response.getHits().getTotalHits());
}
};
return elasticsearchTemplate.query(nativeSearchQuery, rs);
}
}
As you can see, I needed to create a new instance (DefaultEntityMapper mapper = new DefaultEntityMapper();), which should not be necessary, because it should be possible to @Autowire an EntityMapper. If I do so, I get the exception that there is no bean.
Description:
Field entityMapper in com.example.elasticsearch5.es.cluster.repository.impl.CustomizedCluserRepositoryImpl required a bean of type 'org.springframework.data.elasticsearch.core.EntityMapper' that could not be found.
Action:
Consider defining a bean of type 'org.springframework.data.elasticsearch.core.EntityMapper' in your configuration.
So does anybody know whether it's possible to autowire EntityMapper directly, or whether the bean needs to be created manually using the @Bean annotation?
I use spring-data-elasticsearch-3.0.2.RELEASE.jar, which contains the core package.
My pom.xml:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-elasticsearch</artifactId>
</dependency>
I checked out the source code of spring-data-elasticsearch. There is no bean/component definition for EntityMapper, so it seems this answer is wrong; I tested it on my project and got the same error:
Consider defining a bean of type 'org.springframework.data.elasticsearch.core.EntityMapper' in your configuration.
I couldn't find any option other than defining a @Bean myself.
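A minimal sketch of such a configuration (my assumption; it simply exposes the same DefaultEntityMapper the template falls back to internally in 3.0.x):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.core.DefaultEntityMapper;
import org.springframework.data.elasticsearch.core.EntityMapper;
@Configuration
public class ElasticsearchMapperConfig {
@Bean
public EntityMapper entityMapper() {
// the same mapper the ElasticsearchTemplate creates by default
return new DefaultEntityMapper();
}
}
With this in place, the EntityMapper can be @Autowired into the repository instead of instantiated inline.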
Can someone tell me what the problem is with the implementation below? I'm trying to delete the entire cache and then pre-populate/prime it. However, the code below only deletes both caches; it does not pre-populate/prime them when the two methods are executed. Any idea?
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.Caching;
#Cacheable(cacheNames = "cacheOne")
List<User> cacheOne() throws Exception {...}
#Cacheable(cacheNames = "cacheOne")
List<Book> cacheTwo() throws Exception {...}
#Caching (
evict = {
#CacheEvict(cacheNames = "cacheOne", allEntries = true),
#CacheEvict(cacheNames = "CacheTwo", allEntries = true)
}
)
void clearAndReloadEntireCache() throws Exception
{
// Trying to reload cacheOne and cacheTwo inside this method
// Is this even possible? if not what is the correct approach?
cacheOne();
cacheTwo();
}
I have a Spring Boot application (v1.4.0); more importantly, it uses the following dependencies:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
<groupId>org.ehcache</groupId>
<artifactId>ehcache</artifactId>
<version>3.3.0</version>
</dependency>
<dependency>
<groupId>javax.cache</groupId>
<artifactId>cache-api</artifactId>
<version>1.0.0</version>
</dependency>
If you call the clearAndReloadEntireCache() method, only this method will be processed by the caching interceptor. Calling other methods of the same object, such as cacheOne() and cacheTwo(), will not trigger cache interception at runtime, even though both are annotated with @Cacheable; this is Spring AOP's usual self-invocation limitation.
You can achieve the desired functionality by reloading cacheOne and cacheTwo with the two methods shown below:
@Caching(evict = {@CacheEvict(cacheNames = "cacheOne", allEntries = true, beforeInvocation = true)},
cacheable = {@Cacheable(cacheNames = "cacheOne")})
public List<User> cleanAndReloadCacheOne() {
return cacheOne();
}
@Caching(evict = {@CacheEvict(cacheNames = "cacheTwo", allEntries = true, beforeInvocation = true)},
cacheable = {@Cacheable(cacheNames = "cacheTwo")})
public List<Book> cleanAndReloadCacheTwo() {
return cacheTwo();
}
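Alternatively, the evict-and-prime step can be done programmatically through the CacheManager, which sidesteps the self-invocation problem altogether. A minimal sketch (my own variant, not from the question; note that no-arg @Cacheable methods are stored under SimpleKey.EMPTY by the default key generator):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.CacheManager;
import org.springframework.cache.interceptor.SimpleKey;
@Autowired
private CacheManager cacheManager;
void clearAndReloadEntireCache() throws Exception {
// clear() talks to the cache directly, so no proxy is involved
cacheManager.getCache("cacheOne").clear();
cacheManager.getCache("cacheTwo").clear();
// put() likewise bypasses the interceptor; direct calls to cacheOne()
// and cacheTwo() execute the real methods, and we store the results
// under the same key the default generator uses for no-arg methods
cacheManager.getCache("cacheOne").put(SimpleKey.EMPTY, cacheOne());
cacheManager.getCache("cacheTwo").put(SimpleKey.EMPTY, cacheTwo());
}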
I want to be able to use a test properties file and only override a few properties. Having to override every single property would get ugly fast.
This is the code I am using to test my ability to mock properties and use existing properties in a test case:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = MyApp.class)
@TestPropertySource(
locations = { "classpath:myapp-test.properties" },
properties = { "test.key = testValue" })
public class EnvironmentMockedPropertiesTest {
@Autowired private Environment env;
// @MockBean private Environment env;
@Test public void testExistingProperty() {
// some.property=someValue
final String keyActual = "some.property";
final String expected = "someValue";
final String actual = env.getProperty(keyActual);
assertEquals(expected, actual);
}
@Test public void testMockedProperty() {
final String keyMocked = "mocked.test.key";
final String expected = "mockedTestValue";
when(env.getProperty(keyMocked)).thenReturn(expected);
final String actual = env.getProperty(keyMocked);
assertEquals(expected, actual);
}
@Test public void testOverriddenProperty() {
final String expected = "testValue";
final String actual = env.getProperty("test.key");
assertEquals(expected, actual);
}
}
What I find is:
@Autowired private Environment env;
testExistingProperty() and testOverriddenProperty() pass
testMockedProperty() fails
@MockBean private Environment env;
testMockedProperty() passes
testExistingProperty() and testOverriddenProperty() fail
Is there a way to achieve what I am aiming for?
Dependencies:
<spring.boot.version>1.4.3.RELEASE</spring.boot.version>
...
<!-- Spring -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot</artifactId>
<version>${spring.boot.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-autoconfigure</artifactId>
<version>${spring.boot.version}</version>
</dependency>
<!-- Starter for testing Spring Boot applications with libraries including JUnit,
Hamcrest and Mockito -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<version>${spring.boot.version}</version>
</dependency>
OK, I have made this work. You need to use Mockito to accomplish what you are looking for:
Maven Dependency
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-core</artifactId>
<version>2.6.4</version>
</dependency>
Test Class Set up
import static org.mockito.Mockito.*;
import static org.springframework.test.util.AopTestUtils.getTargetObject;
@RunWith(SpringRunner.class)
@SpringBootTest(classes = MyApp.class)
@TestPropertySource(
locations = { "classpath:myapp-test.properties" },
properties = { "test.key = testValue" })
public class AnswerTest {
// This will be only for injecting, we will not be using this object in tests.
@Autowired
private Environment env;
// This is the reference that will be used in tests.
private Environment envSpied;
// Map of properties that you intend to mock
private Map<String, String> mockedProperties;
@PostConstruct
public void postConstruct(){
mockedProperties = new HashMap<String, String>();
mockedProperties.put("mocked.test.key_1", "mocked.test.value_1");
mockedProperties.put("mocked.test.key_2", "mocked.test.value_2");
mockedProperties.put("mocked.test.key_3", "mocked.test.value_3");
// We use the Spy feature of Mockito, which enables partial mocking
envSpied = Mockito.spy((Environment) getTargetObject(env));
// We mock certain retrieval of certain properties
// based on the logic contained in the implementation of Answer class
doAnswer(new CustomAnswer()).when(envSpied).getProperty(Mockito.anyString());
}
Test case
// Testing for both mocked and real properties in same test method
@Test public void shouldReturnAdequateProperty() {
String mockedValue = envSpied.getProperty("mocked.test.key_3");
String realValue = envSpied.getProperty("test.key");
assertEquals(mockedValue, "mocked.test.value_3");
assertEquals(realValue, "testValue");
}
Implementation of Mockito's Answer interface
// Here we define what should mockito do:
// a) return mocked property if the key is a mock
// b) invoke real method on Environment otherwise
private class CustomAnswer implements Answer<String>{
@Override
public String answer(InvocationOnMock invocationOnMock) throws Throwable {
Object[] arguments = invocationOnMock.getArguments();
String parameterKey = (String) arguments[0];
String mockedValue = mockedProperties.get(parameterKey);
if(mockedValue != null){
return mockedValue;
}
return (String) invocationOnMock.callRealMethod();
}
}
}
Try it out, and let me know if all is clear here.
I cannot get WebDriverManager to work. I would like to use PhantomJSDriver without having to set a system property like this:
System.setProperty("phantomjs.binary.path", "E:/phantomjs-2.1.1-windows/bin/phantomjs.exe");
I have these dependencies in my pom.xml:
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-server</artifactId>
<version>3.0.1</version>
</dependency>
<dependency>
<groupId>io.github.bonigarcia</groupId>
<artifactId>webdrivermanager</artifactId>
<version>1.5.1</version>
</dependency>
This is my code/test:
import static org.junit.Assert.assertEquals;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
public class TestA {
WebDriver driver;
@BeforeClass
public static void setupClass() {
PhantomJsDriverManager.getInstance().setup();
}
@Before
public void setUp() {
driver = new PhantomJSDriver();
}
@Test
public void test() {
driver.get("https://www.google.de/");
System.out.println(driver.getTitle());
assertEquals("Google", driver.getTitle());
}
}
The test fails:
org.junit.ComparisonFailure: expected:<[Google]> but was:<[]>
Does anybody know what I am doing wrong? Thanks in advance!
UPDATE: Now I have another problem. Before using WebDriverManager I had this:
DesiredCapabilities dc = DesiredCapabilities.phantomjs();
dc.setJavascriptEnabled(true);
dc.setCapability(PhantomJSDriverService.PHANTOMJS_CLI_ARGS,
new String[] { "--web-security=no", "--ignore-ssl-errors=yes" });
System.setProperty("phantomjs.binary.path", "E:/phantomjs-2.1.1-windows/bin/phantomjs.exe");
WebDriver driver = new PhantomJSDriver(dc);
Now, when I delete the line with System.setProperty(...), it is not working anymore. Thanks for helping.
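From what I understand of WebDriverManager, setup() has to run before the driver is constructed, because setup() is what downloads the binary and sets phantomjs.binary.path for you. So presumably (my assumption, not verified) the capabilities variant should be ordered like this:
PhantomJsDriverManager.getInstance().setup(); // resolves the binary and sets phantomjs.binary.path
DesiredCapabilities dc = DesiredCapabilities.phantomjs();
dc.setJavascriptEnabled(true);
dc.setCapability(PhantomJSDriverService.PHANTOMJS_CLI_ARGS,
new String[] { "--web-security=no", "--ignore-ssl-errors=yes" });
WebDriver driver = new PhantomJSDriver(dc);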
It looks like you're making the assertion too early, so the page is not loaded when you call getTitle() on it. What does your println print out?
Try adding a wait to your test. If you know the page title should be "Google", why not wait for that to be true before doing any further assertions? When the page title equals what you're expecting, you can be reasonably confident the page has loaded. Try this:
public Boolean waitForPageIsLoaded(String title) {
return new WebDriverWait(driver, 10).until(ExpectedConditions.titleIs(title));
}
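Used from the test it might look like this (a sketch reusing the test method from the question; WebDriverWait and ExpectedConditions come from org.openqa.selenium.support.ui):
@Test
public void test() {
driver.get("https://www.google.de/");
// block (up to 10 seconds) until the title matches before asserting
waitForPageIsLoaded("Google");
assertEquals("Google", driver.getTitle());
}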