How to use DeleteByQuery plugin with embedded ES 2.3.3

I have run ES 2.3.3 in an embedded fashion, but I'm unable to invoke the DeleteByQuery action due to the exception shown below. I added the DeleteByQuery plugin to my classpath and also set the plugin.types setting for my node, but it is still not working.
My Maven dependencies:
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<version>2.3.3</version>
</dependency>
<dependency>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>delete-by-query</artifactId>
<version>2.3.3</version>
</dependency>
My ES Setup:
Settings elasticsearchSettings = Settings.settingsBuilder()
.put("threadpool.index.queue_size", -1)
.put("path.home", options.getDirectory())
.put("plugin.types", DeleteByQueryPlugin.class.getName())
.build();
NodeBuilder builder = NodeBuilder.nodeBuilder();
node = builder.local(true).settings(elasticsearchSettings).node();
Invocation of the action, which is used to truncate the index:
DeleteByQueryRequestBuilder builder = new DeleteByQueryRequestBuilder(node.client(), DeleteByQueryAction.INSTANCE);
builder.setIndices(indexName).setQuery(QueryBuilders.matchAllQuery()).execute().addListener(new ActionListener<DeleteByQueryResponse>() {
    @Override
    public void onResponse(DeleteByQueryResponse response) {
        if (log.isDebugEnabled()) {
            log.debug("Deleted index {" + indexName + "}. Duration " + (System.currentTimeMillis() - start) + "[ms]");
        }
        sub.onCompleted();
    }

    @Override
    public void onFailure(Throwable e) {
        log.error("Deleting index {" + indexName + "} failed. Duration " + (System.currentTimeMillis() - start) + "[ms]", e);
        sub.onError(e);
    }
});
Exception that I'm seeing:
Caused by: java.lang.IllegalStateException: failed to find action [org.elasticsearch.action.deletebyquery.DeleteByQueryAction#7c1ed3a2] to execute
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:56) ~[elasticsearch-2.3.3.jar:2.3.3]
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359) ~[elasticsearch-2.3.3.jar:2.3.3]

I noticed that the Node builder invokes the node constructor with an empty plugin list. I extended the Node class in order to invoke this (protected) constructor.
public class ESNode extends Node {
protected ESNode(Settings settings, Collection<Class<? extends Plugin>> plugins) {
super(InternalSettingsPreparer.prepareEnvironment(settings, null), Version.CURRENT, plugins);
}
}
Using this ESNode, the needed plugin was loaded.
Set<Class<? extends Plugin>> classpathPlugins = new HashSet<>();
classpathPlugins.add(DeleteByQueryPlugin.class);
node = new ESNode(settings, classpathPlugins).start();
This may not be ideal but so far it is working just fine.

Related

How to increase file size upload limit in spring boot using embedded tomcat

I am trying to upload a file using my Spring Boot API. The function works fine with small files (less than 1 MB), but when I upload a large file it gives me an exception. I am using the embedded Tomcat server.
Maximum upload size exceeded;
nested exception is java.lang.IllegalStateException: org.apache.tomcat.util.http.fileupload.impl.FileSizeLimitExceededException: The field file exceeds its maximum permitted size of 1048576 bytes.
I have tried the following configuration, but I still get the error every time.
1. application.properties
server.tomcat.max-swallow-size=100MB
server.tomcat.max-http-post-size=100MB
spring.servlet.multipart.enabled=true
spring.servlet.multipart.fileSizeThreshold=100MB
spring.servlet.multipart.max-file-size=100MB
spring.servlet.multipart.max-request-size=100MB
I have also tried
spring.servlet.multipart.maxFileSize=100MB
spring.servlet.multipart.maxRequestSize=100MB
2. Below is my file upload code:
public RestDTO uploadFile(MultipartFile file, String subPath) {
    if (file.isEmpty()) {
        return new RestFailure("Failed to store empty file");
    }
    try {
        String fileName = new Date().getTime() + "_" + file.getOriginalFilename();
        String filePath = uploadPath + subPath + fileName;
        if (Objects.equals(file.getOriginalFilename(), "blob")) {
            filePath += ".png";
            fileName += ".png";
        }
        File uploadDir = new File(uploadPath + subPath);
        if (!uploadDir.exists()) {
            uploadDir.mkdirs();
        }
        // write the uploaded bytes to disk; try-with-resources closes the stream
        try (FileOutputStream output = new FileOutputStream(filePath)) {
            output.write(file.getBytes());
        }
        LOGGER.info("File path : " + filePath);
        MediaInfoDTO mediaInfoDTO = getThumbnailFromVideo(subPath, fileName);
        String convertedFileName = convertVideoToMP4(subPath, fileName);
        System.out.println("---------------->" + convertedFileName);
        return new RestData<>(new MediaDetailDTO(mediaInfoDTO.getMediaPath(), convertedFileName,
                mediaInfoDTO.getMediaType(), mediaInfoDTO.getMediaCodec(), mediaInfoDTO.getWidth(),
                mediaInfoDTO.getHeight(), mediaInfoDTO.getDuration()));
    } catch (IOException e) {
        LOGGER.info("Can't upload file: " + e.getMessage());
        return new RestFailure("Failed to store file");
    }
}
but every time I got the same exception.
Apart from the comments, might I suggest creating a @Bean for a MultipartConfigElement factory.
This should override any other restrictions you may have on the Tomcat side.
@Bean
public MultipartConfigElement multipartConfigElement() {
    MultipartConfigFactory factory = new MultipartConfigFactory();
    factory.setMaxFileSize(DataSize.ofBytes(100000000L));
    factory.setMaxRequestSize(DataSize.ofBytes(100000000L));
    return factory.createMultipartConfig();
}
Here DataSize is of type org.springframework.util.unit.DataSize
Reference https://github.com/spring-projects/spring-boot/issues/11284
Another issue I suspect could be Tomcat's maxSwallowSize; see point #5 in the Baeldung article below if the above does not work.
https://www.baeldung.com/spring-maxuploadsizeexceeded
After reviewing many examples and running several tests with no results, I managed to solve the problem with the following configuration:
Add the following dependencies to the pom:
<dependency>
<groupId>commons-fileupload</groupId>
<artifactId>commons-fileupload</artifactId>
<version>1.4</version>
</dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.6</version>
</dependency>
Remove from yml:
spring:
  servlet:
    multipart:
      enabled: true
      file-size-threshold: 2KB
      max-file-size: 10MB
      max-request-size: 10MB
Add to yml:
server:
  tomcat:
    max-swallow-size: -1
    max-http-form-post-size: -1
And last but not least:
@Bean
public MultipartResolver multipartResolver() {
    CommonsMultipartResolver resolver = new CommonsMultipartResolver();
    resolver.setDefaultEncoding(StandardCharsets.UTF_8.displayName());
    resolver.setMaxUploadSize(52428800L);        // 50MB
    resolver.setMaxUploadSizePerFile(52428800L); // 50MB
    return resolver;
}

@ExceptionHandler(MaxUploadSizeExceededException.class)
public ResponseEntity<Object> handleFileUploadError(MaxUploadSizeExceededException ex) {
    return ResponseEntity.status(EXPECTATION_FAILED).body(
            CustomResponse.builder()
                    .status(Status.ERROR)
                    .message(ex.getMessage())
                    .build());
}
// Where CustomResponse class is in my case:
/**
* The UploadResponse class
* <p>
* Contain the response body
*/
@Getter
@Builder(toBuilder = true)
@AllArgsConstructor
@JsonInclude(JsonInclude.Include.NON_NULL)
public class CustomResponse {
    /**
     * The status
     */
    private final Status status;
    /**
     * The message
     */
    private final String message;
    /**
     * The errors
     */
    private final Set<String> errors;
}

Flink Elasticsearch connector

I used the following code to connect Flink to Elasticsearch, but when running it with Flink a lot of errors are displayed. The program first reads data from a socket port, then counts the words in each line it receives and prints the counts on the command line. The main problem is the connection to Elasticsearch, which unfortunately fails. What is causing these errors? Which classes do you need to connect a minimal Flink job to Elasticsearch?
public class Elastic {
public static void main(String[] args) throws Exception {
// the port to connect to
final int port;
try {
final ParameterTool params = ParameterTool.fromArgs(args);
port = params.getInt("port");
} catch (Exception e) {
System.err.println("No port specified. Please run 'SocketWindowWordCount --port <port>'");
return;
}
// get the execution environment
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// get input data by connecting to the socket
DataStream<String> text = env.socketTextStream("localhost", port, "\n");
// parse the data, group it, window it, and aggregate the counts
DataStream<WordWithCount> windowCounts = text
.flatMap(new FlatMapFunction<String, WordWithCount>() {
@Override
public void flatMap(String value, Collector<WordWithCount> out) {
for (String word : value.split("\\s")) {
out.collect(new WordWithCount(word, 1L));
}
}
})
.keyBy("word")
.timeWindow(Time.seconds(5), Time.seconds(1))
.reduce(new ReduceFunction<WordWithCount>() {
@Override
public WordWithCount reduce(WordWithCount a, WordWithCount b) {
return new WordWithCount(a.word, a.count + b.count);
}
});
// print the results with a single thread, rather than in parallel
windowCounts.print().setParallelism(1);
text.print().setParallelism(1);
env.execute("Socket Window WordCount");
List<HttpHost> httpHosts = new ArrayList<HttpHost>();
httpHosts.add(new HttpHost("127.0.0.1", 9200, "http"));
httpHosts.add(new HttpHost("10.2.3.1", 9200, "http"));
httpHosts.add(new HttpHost("my-ip",9200,"http"));
ElasticsearchSink.Builder<String> esSinkBuilder = new ElasticsearchSink.Builder<String>(
httpHosts,
new ElasticsearchSinkFunction<String>() {
public IndexRequest createIndexRequest(String element) {
Map<String, String> json = new HashMap<String, String>();
json.put("data", element);
return Requests.indexRequest()
.index("iran")
.type("int")
.source(json);
}
@Override
public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
indexer.add(createIndexRequest(element));
}
}
);
esSinkBuilder.setBulkFlushMaxActions(1);
final Header[] defaultHeaders = new Header[]{new BasicHeader("header", "value")};
esSinkBuilder.setRestClientFactory(new RestClientFactory() {
@Override
public void configureRestClientBuilder(RestClientBuilder restClientBuilder) {
restClientBuilder.setDefaultHeaders(defaultHeaders)
.setMaxRetryTimeoutMillis(10000)
.setPathPrefix("a")
.setRequestConfigCallback(new RestClientBuilder.RequestConfigCallback() {
@Override
public RequestConfig.Builder customizeRequestConfig(RequestConfig.Builder builder) {
return builder.setSocketTimeout(10000);
}
});
}
});
text.addSink(esSinkBuilder.build());
}
// Data type for words with count
public static class WordWithCount {
public String word;
public long count;
public WordWithCount() {
}
public WordWithCount(String word, long count) {
this.word = word;
this.count = count;
}
@Override
public String toString() {
return word + " : " + count;
}
}
}
my elasticsearch version: 7.5.0
my flink version: 1.8.3
my error:
sudo /etc/flink-1.8.3/bin/flink run -c org.apache.flink.Elastic /root/FlinkElastic-1.0.jar --port 9000
------------------------------------------------------------
The program finished with the following exception:
java.lang.RuntimeException: Could not look up the main(String[]) method from the class
org.apache.flink.Elastic:
org/apache/flink/streaming/connectors/elasticsearch/ElasticsearchSinkFunction
at org.apache.flink.client.program.PackagedProgram.hasMainMethod(PackagedProgram.java:527)
at org.apache.flink.client.program.PackagedProgram.<init>(PackagedProgram.java:246)
... 7 more
Caused by: java.lang.NoClassDefFoundError:
org/apache/flink/streaming/connectors/elasticsearch/ElasticsearchSinkFunction
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
at org.apache.flink.client.program.PackagedProgram.hasMainMethod(PackagedProgram.java:521)
... 7 more
Caused by: java.lang.ClassNotFoundException:
org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$ChildFirstClassLoader.loadClass(FlinkUserCodeClassLoaders.java:120)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 13 more
my pom:
<groupId>org.apache.flink</groupId>
<artifactId>FlinkElastic</artifactId>
<version>1.0</version>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.6.1</version>
<configuration>
<source>6</source>
<target>6</target>
</configuration>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-elasticsearch6_2.11</artifactId>
<version>1.8.3</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
<version>1.8.3</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_2.11</artifactId>
<version>1.8.3</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients_2.11</artifactId>
<version>1.8.3</version>
</dependency>
</dependencies>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
Please find the Flink Elastic Connector code here. I have used the dependencies and versions mentioned below.
Flink: 1.10.0
ElasticSearch: 7.6.2
flink-connector-elasticsearch7
Scala: 2.12.11
SBT: 1.2.8
Java: 11.0.4
Point to be noted here:
From Elasticsearch 6.x onwards, the REST client is fully supported; up to Elasticsearch 5.x, the Transport client was used.
1. Flink DataStream
val inputStream: DataStream[(String, String)] = ...
ESSinkService.sinkToES(inputStream, index)
2. ElasticsearchSink function
package demo.elastic
import org.apache.flink.streaming.api.scala._
import org.apache.log4j._
import org.apache.flink.api.common.functions.RuntimeContext
import org.apache.flink.streaming.connectors.elasticsearch7.{ElasticsearchSink, RestClientFactory}
import org.apache.flink.streaming.connectors.elasticsearch.{ActionRequestFailureHandler, ElasticsearchSinkFunction, RequestIndexer}
import org.apache.http.HttpHost
import org.elasticsearch.client.{Requests, RestClientBuilder}
import org.elasticsearch.common.xcontent.XContentType
import org.elasticsearch.action.ActionRequest
import org.apache.flink.streaming.api.datastream.DataStreamSink
class ESSinkService {
val logger = Logger.getLogger(getClass.getName)
val httpHosts = new java.util.ArrayList[HttpHost]
httpHosts.add(new HttpHost("localhost", 9200, "http"))
httpHosts.add(new HttpHost("localhost", 9200, "http"))
def sinkToES(counted: DataStream[(String, String)], index: String): DataStreamSink[(String, String)] = {
val esSinkBuilder = new ElasticsearchSink.Builder[(String, String)](
httpHosts, new ElasticsearchSinkFunction[(String, String)] {
def process(element: (String, String), ctx: RuntimeContext, indexer: RequestIndexer) {
indexer.add(Requests.indexRequest
.index(element._2 + "_" + index)
.source(element._1, XContentType.JSON))
}
}
)
esSinkBuilder.setBulkFlushMaxActions(2)
esSinkBuilder.setBulkFlushInterval(1000L)
esSinkBuilder.setFailureHandler(new ActionRequestFailureHandler {
override def onFailure(actionRequest: ActionRequest, throwable: Throwable, i: Int, requestIndexer: RequestIndexer): Unit = {
println("#######On failure from ElasticsearchSink:-->" + throwable.getMessage)
}
})
esSinkBuilder.setRestClientFactory(new RestClientFactory {
override def configureRestClientBuilder(restClientBuilder: RestClientBuilder): Unit = {
/*restClientBuilder.setDefaultHeaders(...)
restClientBuilder.setMaxRetryTimeoutMillis(...)
restClientBuilder.setPathPrefix(...)
restClientBuilder.setHttpClientConfigCallback(...)*/
}
})
counted.addSink(esSinkBuilder.build())
}
}
object ESSinkService extends ESSinkService
Note: For more details click here.
A couple of things:
Flink doesn't yet support Elasticsearch 7. An ES7 connector will be released along with Flink 1.10.
You must include the flink/elasticsearch dependency in your project -- this error suggests you haven't included it:
ClassNotFoundException:
org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction
See the elasticsearch docs for more info.
Your Flink application code runs in the task managers. Each task manager must be able to find all of your application's dependencies in its CLASSPATH. The connector classes are not included out-of-the-box, so you will need to either build an uber jar (i.e., a fat jar, or jar with dependencies), or copy the flink-connector-elasticsearch6_2.11 jar file into the lib directory of every machine in the cluster. See the docs on connector dependencies for more details.
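As a rough sketch of the uber-jar route (the plugin version and the exclude list are assumptions on my part; the common alternative is marking the core Flink dependencies as provided), a maven-shade-plugin section like this could be added to the build so the connector classes end up inside the job jar:
<!-- sketch only: bundle the Elasticsearch connector into an uber jar -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.2.1</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <artifactSet>
                    <excludes>
                        <!-- the cluster already provides the core Flink runtime -->
                        <exclude>org.apache.flink:flink-java</exclude>
                        <exclude>org.apache.flink:flink-streaming-java_2.11</exclude>
                        <exclude>org.apache.flink:flink-clients_2.11</exclude>
                    </excludes>
                </artifactSet>
            </configuration>
        </execution>
    </executions>
</plugin>
After packaging, submitting the resulting uber jar with flink run should make the connector classes visible to the task managers.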

Spring boot webflux TEXT_EVENT_STREAM_VALUE is not working

I'm using Spring Boot with the spring-boot-starter-webflux dependency, and I want the browser to receive one piece of data per second.
With Spring Boot version 2.1.4 the code works, but with 2.1.5 or a greater version it does not: I get all of the data 10 seconds later instead of one piece per second.
I would like to know the reason, or what else I should do.
I found that Spring Boot updated the reactor-netty dependency in 2.1.5, so if I add this dependency to my pom.xml:
<dependency>
<groupId>io.projectreactor.netty</groupId>
<artifactId>reactor-netty</artifactId>
<version>0.8.8.RELEASE</version>
</dependency>
it works.
@RestController
@RequestMapping("/demo")
public class DemoController {
    // just get a string per second
    @GetMapping(value = "", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> getMsg() {
        return Flux.fromStream(new Random().ints(10).mapToObj(intStream -> {
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            return "this is data " + intStream;
        }));
    }
}
I believe this will achieve what it is you are going for.
@GetMapping(value = "", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<String> getMsg() {
    return Flux.fromStream(new Random()
                    .ints(10)
                    .mapToObj(value -> "this is data " + value))
            .delayElements(Duration.ofSeconds(1));
}
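If you prefer to avoid the blocking TimeUnit.SECONDS.sleep call entirely, a non-blocking sketch using Flux.interval would look roughly like this (the controller and path names are made up for illustration):
import java.time.Duration;
import java.util.Random;

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import reactor.core.publisher.Flux;

@RestController
public class IntervalDemoController { // hypothetical name, for illustration only

    // emits one value per second, ten times, without blocking a thread
    @GetMapping(value = "/demo-interval", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> getMsgWithInterval() {
        Random random = new Random();
        return Flux.interval(Duration.ofSeconds(1))
                .take(10)
                .map(tick -> "this is data " + random.nextInt());
    }
}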

ClassNotFoundException OptionsStrategy

When I connect to a remote server and try to modify a graph I get a java.lang.ClassNotFoundException: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/tinkerpop/gremlin/process/traversal/strategy/decoration/OptionsStrategy
I searched for information about this exception. I think it happens because of a version conflict between janusgraph-core, gremlin-server and gremlin-driver.
//pom file dependencies
<dependencies>
<dependency>
<groupId>org.janusgraph</groupId>
<artifactId>janusgraph-core</artifactId>
<version>0.3.1</version>
</dependency>
<dependency>
<groupId>org.apache.tinkerpop</groupId>
<artifactId>gremlin-driver</artifactId>
<version>3.4.2</version>
</dependency>
<dependency>
<groupId>org.apache.tinkerpop</groupId>
<artifactId>gremlin-server</artifactId>
<version>3.4.2</version>
</dependency>
</dependencies>
//jgex-remote.properties file
gremlin.remote.remoteConnectionClass=org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection
gremlin.remote.driver.sourceName=g
gremlin.remote.driver.clusterFile=.../janus_connect_config.yaml
//janus_connect_config.yaml file
hosts: [xxx.xxx.xxx.xxx]
port: xxxx
serializer: {
className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0,
config: {
ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry]
}
}
// java code
public class App {
public static void main(String[] args) throws ConfigurationException {
if (args.length == 0) {
throw new IllegalArgumentException("Input args must contains path to file with configuration");
}
String configFilePath = args[0];
PropertiesConfiguration connectConfig = new PropertiesConfiguration(configFilePath);
Cluster cluster = null;
Client client = null;
try {
cluster = Cluster.open(connectConfig.getString("gremlin.remote.driver.clusterFile"));
client = cluster.connect();
Bindings b = Bindings.instance();
GraphTraversalSource graph = EmptyGraph.instance()
.traversal()
.withRemote(connectConfig);
Vertex evnyh = graph.addV(b.of("label", "man"))
.property("name", "Evnyh")
.property("family", "Evnyhovich")
.next();
Vertex lalka = graph.addV(b.of("label", "man"))
.property("name", "Lalka")
.property("family", "Lalkovich")
.next();
graph.V(b.of("outV", evnyh)).as("a")
.V(b.of("inV", lalka)).as("b")
.addE(b.of("label", "friend")).from("a")
.next();
} catch (Exception e) {
e.printStackTrace();
} finally {
if (client != null) {
try {
client.close();
} catch (Exception e) {
// nothing to do, just close client
}
}
if (cluster != null) {
try {
cluster.close();
} catch (Exception e) {
// nothing to do, just close cluster
}
}
}
}
}
Can somebody help resolve this problem?
You have a version mismatch. Note that JanusGraph 0.3.1 is bound to the TinkerPop 3.3.x line of code:
https://github.com/JanusGraph/janusgraph/blob/v0.3.1/pom.xml#L72
and OptionsStrategy (and related functionality) was not added to TinkerPop until the 3.4.x line of code. JanusGraph therefore cannot process requests which use that sort of functionality.
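One way to resolve it, sketched below under the assumption that you stay on JanusGraph 0.3.1, is to align the TinkerPop client artifacts with the 3.3.x line (take the exact patch version from the JanusGraph pom linked above); alternatively, moving to a newer JanusGraph release that is built on TinkerPop 3.4.x also removes the mismatch:
<!-- sketch: keep JanusGraph 0.3.1 and move the TinkerPop artifacts back to its 3.3.x line -->
<dependency>
    <groupId>org.janusgraph</groupId>
    <artifactId>janusgraph-core</artifactId>
    <version>0.3.1</version>
</dependency>
<dependency>
    <groupId>org.apache.tinkerpop</groupId>
    <artifactId>gremlin-driver</artifactId>
    <!-- 3.3.3 is an assumption; use the exact version referenced by the JanusGraph 0.3.1 pom -->
    <version>3.3.3</version>
</dependency>
<dependency>
    <groupId>org.apache.tinkerpop</groupId>
    <artifactId>gremlin-server</artifactId>
    <version>3.3.3</version>
</dependency>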

Spring-Boot Elasticsearch EntityMapper cannot be autowired

Based on this answer and the comments, I implemented the code to receive the scores of an Elasticsearch query.
public class CustomizedHotelRepositoryImpl implements CustomizedHotelRepository {

    private final ElasticsearchTemplate elasticsearchTemplate;

    @Autowired
    public CustomizedHotelRepositoryImpl(ElasticsearchTemplate elasticsearchTemplate) {
        super();
        this.elasticsearchTemplate = elasticsearchTemplate;
    }

    @Override
    public Page<Hotel> findHotelsAndScoreByName(String name) {
        QueryBuilder queryBuilder = QueryBuilders.boolQuery()
                .should(QueryBuilders.queryStringQuery(name).lenient(true).defaultOperator(Operator.OR).field("name"));
        NativeSearchQuery nativeSearchQuery = new NativeSearchQueryBuilder().withQuery(queryBuilder)
                .withPageable(PageRequest.of(0, 100)).build();
        DefaultEntityMapper mapper = new DefaultEntityMapper();
        ResultsExtractor<Page<Hotel>> rs = new ResultsExtractor<Page<Hotel>>() {
            @Override
            public Page<Hotel> extract(SearchResponse response) {
                ArrayList<Hotel> hotels = new ArrayList<>();
                SearchHit[] hits = response.getHits().getHits();
                for (SearchHit hit : hits) {
                    try {
                        Hotel hotel = mapper.mapToObject(hit.getSourceAsString(), Hotel.class);
                        hotel.setScore(hit.getScore());
                        hotels.add(hotel);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                return new PageImpl<>(hotels, PageRequest.of(0, 100), response.getHits().getTotalHits());
            }
        };
        return elasticsearchTemplate.query(nativeSearchQuery, rs);
    }
}
As you can see, I needed to create a new instance with DefaultEntityMapper mapper = new DefaultEntityMapper();, which should not be necessary because it should be possible to autowire an EntityMapper with @Autowired. If I do so, I get the exception that there is no such bean.
Description:
Field entityMapper in com.example.elasticsearch5.es.cluster.repository.impl.CustomizedCluserRepositoryImpl required a bean of type 'org.springframework.data.elasticsearch.core.EntityMapper' that could not be found.
Action:
Consider defining a bean of type 'org.springframework.data.elasticsearch.core.EntityMapper' in your configuration.
So does anybody know whether it is possible to autowire EntityMapper directly, or does the bean need to be created manually using the @Bean annotation?
I use spring-data-elasticsearch-3.0.2.RELEASE.jar, which contains the core package.
My pom.xml:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-elasticsearch</artifactId>
</dependency>
I checked out the source code of spring-data-elasticsearch. There is no bean/component definition for EntityMapper. It seems this answer is wrong. I tested it on my project and got the same error.
Consider defining a bean of type 'org.springframework.data.elasticsearch.core.EntityMapper' in your configuration.
I couldn't find any other option except defining a @Bean.
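For reference, a minimal sketch of such a configuration (assuming spring-data-elasticsearch 3.0.x, where DefaultEntityMapper has the no-argument constructor used in the question; the configuration class name is made up):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.core.DefaultEntityMapper;
import org.springframework.data.elasticsearch.core.EntityMapper;

@Configuration
public class ElasticsearchMapperConfig { // hypothetical configuration class name

    // expose the default mapper as a bean so EntityMapper can be @Autowired elsewhere
    @Bean
    public EntityMapper entityMapper() {
        return new DefaultEntityMapper();
    }
}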
