How to use the ES Java API to create a new type in an index - elasticsearch

I have succeeded in creating an index using the Client; the code looks like this:
public static boolean addIndex(Client client, String index) throws Exception {
    if (client == null) {
        client = getSettingClient();
    }
    CreateIndexRequestBuilder requestBuilder = client.admin().indices().prepareCreate(index);
    CreateIndexResponse response = requestBuilder.execute().actionGet();
    return response.isAcknowledged();
    //client.close();
}

public static boolean addIndexType(Client client, String index, String type) throws Exception {
    if (client == null) {
        client = getSettingClient();
    }
    TypesExistsAction action = TypesExistsAction.INSTANCE;
    TypesExistsRequestBuilder requestBuilder = new TypesExistsRequestBuilder(client, action, index);
    requestBuilder.setTypes(type);
    TypesExistsResponse response = requestBuilder.get();
    return response.isExists();
}
However, the addIndexType method has no effect; the type is not created.
How do I create a type?

You can create types when you create the index by providing a proper mapping configuration. Alternatively, a type gets created when you index a document of that type. However, the first suggestion is the better one, because then you control the full mapping of that type instead of relying on dynamic mapping.
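A minimal sketch of that first approach with the transport client (assuming a 5.x/6.x client and an existing Client instance; the index name, type name, and field mapping below are made up for illustration):
// Create the index and define the type's mapping in a single request, so the
// type exists with an explicit mapping instead of relying on dynamic mapping.
String mappingJson = "{\"properties\":{\"title\":{\"type\":\"text\"}}}"; // example mapping
CreateIndexResponse response = client.admin().indices()
    .prepareCreate("my_index")
    .addMapping("my_type", mappingJson, XContentType.JSON)
    .get();
boolean created = response.isAcknowledged();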

You can set types in the following way:
// JSONSchema is the JSON mapping that you want to apply to this type of the index.
JSONObject builder = new JSONObject().put(type, JSONSchema);
// And then just fire the command below:
client.admin().indices().preparePutMapping(indexName)
    .setType(type)
    .setSource(builder.toString(), XContentType.JSON)
    .execute()
    .actionGet();

Related

Elastic Search Custom Create Index using Java High Level Rest Client

void createIndexWithCustomMappings(String indexName, String fieldsMapping) {
    CreateIndexResponse createIndexResponse = client.admin().indices()
        .prepareCreate(indexName)
        .setSettings(fieldsMapping)
        .execute()
        .get();
}
I have code which creates the index in Elasticsearch in a Spring Boot application. Currently the client used is the transport client, which is now deprecated as per the Elasticsearch documentation and has been replaced by the High Level REST Client.
For creating an index using the High Level REST Client, I have seen this code:
CreateIndexRequest request = new CreateIndexRequest(indexName);
CreateIndexResponse createIndexResponse = client.indices().create(request, RequestOptions.DEFAULT);
Here fieldsMapping is a JSON file with details regarding the analyzer, tokenizer, and filter, and it is passed as a String to this method. I am not able to find methods in the Java REST high level client to do the equivalent of setSettings(fieldsMapping).execute().get() as done above with the transport client.
Any idea how this setSettings(fieldsMapping) call can be achieved with the Java high level REST client?
You can use the implementation from the ElasticsearchRestTemplate itself.
Using Elasticsearch 6.x:
This is how you create the index with settings:
@Override
public boolean createIndex(String indexName, Object settings) {
    CreateIndexRequest request = new CreateIndexRequest(indexName);
    if (settings instanceof String) {
        request.settings(String.valueOf(settings), Requests.INDEX_CONTENT_TYPE);
    } else if (settings instanceof Map) {
        request.settings((Map) settings);
    } else if (settings instanceof XContentBuilder) {
        request.settings((XContentBuilder) settings);
    }
    try {
        return client.indices().create(request, RequestOptions.DEFAULT).isAcknowledged();
    } catch (IOException e) {
        throw new ElasticsearchException("Error for creating index: " + request.toString(), e);
    }
}
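For illustration, a hypothetical call of the createIndex method above; the index name and settings JSON are made up:
// Hypothetical usage: settings passed as a JSON string, as described in the question.
String settingsJson = "{\"index\":{\"number_of_shards\":1,\"number_of_replicas\":0}}";
boolean created = createIndex("orders-index", settingsJson);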
This is how you update the mappings for the index:
@Override
public boolean putMapping(String indexName, String type, Object mapping) {
    Assert.notNull(indexName, "No index defined for putMapping()");
    Assert.notNull(type, "No type defined for putMapping()");
    PutMappingRequest request = new PutMappingRequest(indexName).type(type);
    if (mapping instanceof String) {
        request.source(String.valueOf(mapping), XContentType.JSON);
    } else if (mapping instanceof Map) {
        request.source((Map) mapping);
    } else if (mapping instanceof XContentBuilder) {
        request.source((XContentBuilder) mapping);
    }
    try {
        return client.indices().putMapping(request, RequestOptions.DEFAULT).isAcknowledged();
    } catch (IOException e) {
        throw new ElasticsearchException("Failed to put mapping for " + indexName, e);
    }
}
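And a similar hypothetical call of putMapping, again with made-up index, type, and field names:
// Hypothetical usage: the mapping for one type passed as a JSON string.
String mappingJson = "{\"properties\":{\"orderNumber\":{\"type\":\"keyword\"}}}";
boolean mapped = putMapping("orders-index", "order", mappingJson);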
Using Elasticsearch 7.x:
You need to create an IndexCoordinates variable with IndexCoordinates.of("indexName"), get the IndexOperations for that index from the ElasticsearchTemplate, and then create your index via the indexOperations variable like this:
IndexOperations indexOperations = elasticsearchTemplate.indexOps(indexCoordinates);
String indexSettings = "" //Pass json string here
String mappingJson = "" //Pass json string here
Document mapping = Document.parse(mappingJson);
Map<String, Object> settings = JacksonUtil.fromString(indexSettings, new TypeReference<>() {});
indexOperations.create(settings, mapping);
indexOperations.refresh(); //(Optional) refreshes the doc count
It really depends on which spring-data-elasticsearch version you are using. Feel free to check out the documentation as well:
https://docs.spring.io/spring-data/elasticsearch/docs/current/reference/html/#new-features
Hope this helps with your elasticsearch journey! Feel free to ask more questions regarding the java implementation :)

How will neo4j jdbc (bolt) handle queries that return a list of nodes?

In neo4j jdbc (bolt), a Node is returned as a Map, but if you run a query that returns a list of nodes, getObject() returns a list of InternalNodes. Entities in this list cannot be identified with instanceof, so I use reflection to identify the node by its type name and to read its values by invoking methods reflectively. I can get the values by doing the following, but is this approach correct? Here rs is the ResultSet and entity is the return value of this method.
Object columnObject = rs.getObject(columnName);
if (columnObject instanceof List<?>) {
    List<Map<String, Object>> objectValue = new ArrayList<>();
    Array columnArray = rs.getArray(columnName);
    Object[] columnArrayValues = (Object[]) columnArray.getArray();
    for (int iTmp = 0; iTmp < columnArrayValues.length; iTmp++) {
        Map<String, Object> colArrayItemMap = new HashMap<>();
        Object colItemObj = columnArrayValues[iTmp];
        Class colItemClass = colItemObj.getClass();
        if (colItemClass.getName().equals("org.neo4j.driver.internal.InternalNode")) {
            Method asMap = colItemClass.getMethod("asMap");
            Method getId = colItemClass.getMethod("id");
            Method getLabels = colItemClass.getMethod("labels");
            colArrayItemMap.put("_id", getId.invoke(colItemObj));
            colArrayItemMap.put("_labels", getLabels.invoke(colItemObj));
            colArrayItemMap.putAll((Map<? extends String, ?>) asMap.invoke(colItemObj));
        } else {
            colArrayItemMap.put("_raw", columnArrayValues[iTmp]);
        }
        objectValue.add(colArrayItemMap);
    }
    ((Map) entity).put(propertyName, objectValue);
} else {
    ((Map) entity).put(propertyName, columnObject);
}
Such results are produced by Cypher statements like this:
MATCH
(input:Input),
(output:Output)
WITH input, output
MATCH
(input)-[:INPUT*1]->(in),
(out)-[:OUTPUT*1]->(output),
g = (in)-[connect:CONNECT*0..5]->(out)
RETURN
input, output, extract(x IN nodes(g)|x) as nodes
It turned out that the reason instanceof could not identify the type was different class loaders: since the JDBC driver was placed in Tomcat/lib, its classes are loaded by a different class loader than the classes loaded by the application.
In any case, until the driver supports returning a proper list of nodes (for example as the return value of getArray() or via getResults()), it seems necessary to write this kind of conversion from the list of InternalNodes to a list of Maps by hand.
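As a small variation on the reflection approach above (my own sketch, not part of the original answer): instead of hard-coding the internal class name, one could match the driver's public Node interface by name, which still avoids instanceof and therefore also works across class loaders:
// Sketch: returns true if the object (or one of its superclasses) implements an
// interface whose name ends with ".types.Node" (covers org.neo4j.driver.v1.types.Node
// and the 4.x package), without requiring class identity across class loaders.
private static boolean looksLikeNode(Object value) {
    for (Class<?> c = value.getClass(); c != null; c = c.getSuperclass()) {
        for (Class<?> iface : c.getInterfaces()) {
            if (iface.getName().endsWith(".types.Node")) {
                return true;
            }
        }
    }
    return false;
}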

NHibernate returning results from Web API

I'm using NHibernate to fetch a collection which has lazy-loaded properties, but I'm having trouble returning it because the serializer tries to serialize the lazy property after the NHibernate session is closed. So is there a way to tell NHibernate to give me a plain list in which any unloaded lazy collections are simply left empty?
For example
IEnumerable<Store> stores = StoreService.GetList(1, 2);
Store has a one-to-many mapping with StockItems which is set to lazy load, and this is what causes the serialization error. I tried
List<Store> stores_r = stores.ToList();
but I get the same thing. Is there something that will traverse the list, fetch the one-to-one relations, ignore the lazy one-to-many collections, and return a finished list?
Thanks
EDIT: A solution I've tried, but which is still not working:
public class NHibernateContractResolver : DefaultContractResolver
{
    protected override JsonContract CreateContract(Type objectType)
    {
        if (typeof(NHibernate.Proxy.INHibernateProxy).IsAssignableFrom(objectType) || typeof(NHibernate.Proxy.ILazyInitializer).IsAssignableFrom(objectType))
        {
            var oType = objectType.GetInterfaces().FirstOrDefault(i => i.FullName.StartsWith("Navace.Models"));
            return oType != null ? base.CreateContract(oType) : base.CreateContract(objectType.BaseType);
        }
        return base.CreateContract(objectType);
    }

    protected override List<MemberInfo> GetSerializableMembers(Type objectType)
    {
        if (typeof(NHibernate.Proxy.INHibernateProxy).IsAssignableFrom(objectType))
        {
            return base.GetSerializableMembers(objectType.BaseType);
        }
        else
        {
            return base.GetSerializableMembers(objectType);
        }
    }
}
I also tried to serialize manually so I could see what's happening:
IEnumerable<Store> stores = StoreService.GetList(1, 2);
List<Store> storess = stores.ToList();
JsonSerializer sr = new JsonSerializer
{
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
    ContractResolver = new NHibernateContractResolver(),
    NullValueHandling = NullValueHandling.Ignore,
};
StringWriter stringWriter = new StringWriter();
JsonWriter jsonWriter = new Newtonsoft.Json.JsonTextWriter(stringWriter);
sr.Serialize(jsonWriter, storess);
string res = stringWriter.ToString();
The error I get is
Outer exception : Error getting value from 'datedcost' on 'PartProxy'.
Inner exception: No row with the given identifier exists[Navace.Models.Part#0]
My recommendation is to return view models instead of domain models. It's confusing to return an empty collection property when it may have data. By converting the domain model to a view model (using LINQ Select or AutoMapper), the serializer will only touch (and attempt to lazy load) the properties in the view model.

Web API OData custom query issue

I am new to Web API, Entity Framework, and OData. I asked a similar question in another forum but haven't gotten a relevant response.
We have an OData-compliant Web API service for use in Salesforce. We have a custom, complex query in Oracle that we need to expose.
I am not sure how to use a custom query like this while still allowing OData parameter filtering ($filter, $top, $skip, etc.) to occur. For example, when a $filter is used I want to apply that filter to the custom query and then send it to the database to have it return the result set. How can I do this?
The issue I seem to have is that I can see the parameters as they come in, but they are not translated into the query being passed to Oracle. It seems that it fires the query, returns the full result set, and only then applies the parameters. This is very slow because the result set is very large.
I am hoping to figure out 2 things:
1. How can I use custom SQL and apply OData parameters to the underlying query?
2. When using EF or a custom query, how can I apply OData parameters to the query so that when the query is sent to the database the $filter parameter, for example, is included in it? I don't want the full result returned and the filter applied afterwards.
Can anyone give me some pointers on how to make this happen?
private static ODataValidationSettings _validationSettings = new ODataValidationSettings();

//public IHttpActionResult GetName()
//{ }

// GET: odata/ShareData
[ODataRoute("Orders")]
[EnableQuery(PageSize = 50)]
public IHttpActionResult GetOrders(ODataQueryOptions<Orders> queryOptions)
{
    // validate the query.
    try
    {
        queryOptions.Validate(_validationSettings);
    }
    catch (ODataException ex)
    {
        return BadRequest(ex.Message);
    }
    try
    {
        string connectionString = ConfigurationManager.ConnectionStrings["DNATestConnectionString"].ConnectionString;
        var items = GetDataItems(connectionString);
        return Ok<IEnumerable<Orders>>(items);
    }
    catch (Exception ex)
    {
        return StatusCode(HttpStatusCode.InternalServerError);
    }
}
#region Load Data Methods
private static List<Orders> GetDataItems(string connectionString)
{
    List<Orders> items = new List<Orders>();
    using (OracleConnection con = new OracleConnection(connectionString))
    {
        con.Open();
        using (OracleCommand cmd = con.CreateCommand())
        {
            cmd.CommandText = "select po_header_id, segment1, vendor_id, vendor_site_id from po_headers_all where vendor_id=4993";
            using (OracleDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                    items.Add(ToOrders(rdr));
            }
        }
    }
    return items;
}

private static Orders ToOrders(OracleDataReader rdr)
{
    Orders data = new Orders();
    data.VENDOR_ID = ToInt32(rdr, "VENDOR_ID");
    data.VENDOR_SITE_ID = ToInt32(rdr, "VENDOR_SITE_ID");
    data.PO_HEADER_ID = ToInt32(rdr, "PO_HEADER_ID");
    data.SEGMENT1 = Convert.ToString(rdr["SEGMENT1"]);
    return data;
}

private static int ToInt32(OracleDataReader rdr, string name)
{
    int index = rdr.GetOrdinal(name);
    return rdr.IsDBNull(index) ? 0 : Convert.ToInt32(rdr[index]);
}
#endregion
I don't think this is possible.
How can I use custom SQL and apply OData parameters to the underlying query?
As far as I'm aware, you can't. The whole point of the OData library is that it needs to work off an IQueryable. By using custom SQL in a string like you have in your example, you can't combine it with the OData parameters that are being passed in.
One approach would be to have your custom SQL in a SQL view, then add the SQL view to your EF model in the same way as you would add a table - it will be represented as a DbSet just like tables are.
You can then get an IQueryable to represent the dataset and then apply the OData parameters as follows:
public IHttpActionResult GetOrders(ODataQueryOptions<OrdersView> queryOptions)
{
    IQueryable<OrdersView> allData = // ... get the DbSet from entity framework...
    // this will apply the OData query to the data set and only pull the data you want from the database
    var filteredResults = queryOptions.ApplyTo(allData) as IQueryable<OrdersView>;
    return Ok<IQueryable<OrdersView>>(filteredResults);
}

PrepareResponse().AsActionResult() throws unsupported exception DotNetOpenAuth CTP

Currently I'm developing an OAuth2 authorization server using the DotNetOpenAuth CTP version. My authorization server is in ASP.NET MVC 3, and it's based on the sample provided by the library. Everything works fine until the app reaches the point where the user authorizes the consumer client.
There's an action inside my OAuth controller which takes care of the authorization process, and is very similar to the equivalent action in the sample:
[Authorize, HttpPost, ValidateAntiForgeryToken]
public ActionResult AuthorizeResponse(bool isApproved)
{
    var pendingRequest = this.authorizationServer.ReadAuthorizationRequest();
    if (pendingRequest == null)
    {
        throw new HttpException((int)HttpStatusCode.BadRequest, "Missing authorization request.");
    }
    IDirectedProtocolMessage response;
    if (isApproved)
    {
        var client = MvcApplication.DataContext.Clients.First(c => c.ClientIdentifier == pendingRequest.ClientIdentifier);
        client.ClientAuthorizations.Add(
            new ClientAuthorization
            {
                Scope = OAuthUtilities.JoinScopes(pendingRequest.Scope),
                User = MvcApplication.LoggedInUser,
                CreatedOn = DateTime.UtcNow,
            });
        MvcApplication.DataContext.SaveChanges();
        response = this.authorizationServer.PrepareApproveAuthorizationRequest(pendingRequest, User.Identity.Name);
    }
    else
    {
        response = this.authorizationServer.PrepareRejectAuthorizationRequest(pendingRequest);
    }
    return this.authorizationServer.Channel.PrepareResponse(response).AsActionResult();
}
Every time the program reaches this line:
this.authorizationServer.Channel.PrepareResponse(response).AsActionResult();
The system throws an exception which I have researched with no success. The exception is the following:
Only parameterless constructors and initializers are supported in LINQ to Entities.
The stack trace: http://pastebin.com/TibCax2t
The only thing I've done differently from the sample is that I used Entity Framework's code-first approach, and I think the sample was done using a designer which auto-generated the entities.
Thank you in advance.
If you started from the example, the problem Andrew is talking about lies in DatabaseKeyNonceStore.cs. The exception is raised by one of these two methods:
public CryptoKey GetKey(string bucket, string handle) {
    // It is critical that this lookup be case-sensitive, which can only be configured at the database.
    var matches = from key in MvcApplication.DataContext.SymmetricCryptoKeys
                  where key.Bucket == bucket && key.Handle == handle
                  select new CryptoKey(key.Secret, key.ExpiresUtc.AsUtc());
    return matches.FirstOrDefault();
}

public IEnumerable<KeyValuePair<string, CryptoKey>> GetKeys(string bucket) {
    return from key in MvcApplication.DataContext.SymmetricCryptoKeys
           where key.Bucket == bucket
           orderby key.ExpiresUtc descending
           select new KeyValuePair<string, CryptoKey>(key.Handle, new CryptoKey(key.Secret, key.ExpiresUtc.AsUtc()));
}
I've resolved it by moving the initialization outside of the query:
public CryptoKey GetKey(string bucket, string handle) {
    // It is critical that this lookup be case-sensitive, which can only be configured at the database.
    var matches = from key in db.SymmetricCryptoKeys
                  where key.Bucket == bucket && key.Handle == handle
                  select key;
    var match = matches.FirstOrDefault();
    CryptoKey ck = new CryptoKey(match.Secret, match.ExpiresUtc.AsUtc());
    return ck;
}

public IEnumerable<KeyValuePair<string, CryptoKey>> GetKeys(string bucket) {
    var matches = from key in db.SymmetricCryptoKeys
                  where key.Bucket == bucket
                  orderby key.ExpiresUtc descending
                  select key;
    List<KeyValuePair<string, CryptoKey>> en = new List<KeyValuePair<string, CryptoKey>>();
    foreach (var key in matches)
        en.Add(new KeyValuePair<string, CryptoKey>(key.Handle, new CryptoKey(key.Secret, key.ExpiresUtc.AsUtc())));
    return en.AsEnumerable<KeyValuePair<string, CryptoKey>>();
}
I'm not sure that this is the best way, but it works!
It looks like your ICryptoKeyStore implementation may be attempting to store CryptoKey directly, but that is not a class compatible with Entity Framework (due to not having a public default constructor). Instead, define your own entity class for storing the data in CryptoKey; your ICryptoKeyStore is then responsible for translating between the two data types for persistence and retrieval.
