I am using the Derby embedded database for my Maven test cases, and I am not able to use SUBSTR inside TO_DATE; it gives an error.
The query comes from the original application, which runs against an Oracle DB. Now I am writing Maven test cases with the embedded Derby DB and cannot execute it there. The constraint is that I must not modify the original query, so I need some workaround for this issue.
My query looks like this:
SELECT TO_DATE (SUBSTR (testdate, 1, 9), 'DD-MM-RR') FROM testtable
Please help me on this issue. Thanks.
SUBSTR cannot be used with a DATE in Derby, and you cannot override the built-in function. SQL is not easily reused between databases; the easiest fix is to change the SQL.
The harder part is to step deep into Derby:
If you want to make this work without changing the query, you could wrap the Connection or the DataSource and rewrite the SQL at a lower level.
For this to work you need access to the Connection Object in your test:
Connection wrapped = new WrappedConnection(originalConnection);
This is a short example of a wrapped Connection with a migrate function (this is basically the Adapter pattern):
public class WrappedConnection implements Connection
{
    private final Connection origConnection;

    public WrappedConnection(Connection rv)
    {
        origConnection = rv;
    }

    // I left out the other methods that you have to implement accordingly
    // (most of them can simply delegate to origConnection).

    public PreparedStatement prepareStatement(String pSql) throws SQLException
    {
        // this you have to implement yourself;
        // it serves as the bridge between Oracle and Derby
        String sql = migrate(pSql);
        return origConnection.prepareStatement(sql);
    }
}
The migrate function could do something like this:
public String migrate(String sql)
{
    return sql.replace("SUBSTR", "SUBSTR_DATE");
}
But you would have to create your own Derby Function SUBSTR_DATE.
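A minimal sketch of such a function, assuming the stored strings begin with a date like '01-02-21' matching the Oracle format mask 'DD-MM-RR' (the class and method names here are illustrative, not from the original post):

```java
// Illustrative sketch only: DateFunctions/substrDate are made-up names.
// After putting this class on Derby's classpath you would register it with:
//
//   CREATE FUNCTION SUBSTR_DATE(V VARCHAR(50)) RETURNS DATE
//   PARAMETER STYLE JAVA NO SQL LANGUAGE JAVA
//   EXTERNAL NAME 'DateFunctions.substrDate'
//
import java.sql.Date;
import java.text.ParseException;
import java.text.SimpleDateFormat;

public class DateFunctions {

    // Takes the first 9 characters of the stored string and parses them as
    // 'DD-MM-RR'; SimpleDateFormat's "yy" is a rough stand-in for Oracle's RR.
    public static Date substrDate(String value) {
        if (value == null) {
            return null;
        }
        String head = value.substring(0, Math.min(9, value.length()));
        try {
            return new Date(new SimpleDateFormat("dd-MM-yy").parse(head.trim()).getTime());
        } catch (ParseException e) {
            throw new IllegalArgumentException("Cannot parse date: " + value, e);
        }
    }
}
```

With this in place, the migrate function above turns `TO_DATE (SUBSTR (...), ...)` into a call that Derby can resolve, though you may still need to strip the format-mask argument depending on how literally you rewrite the SQL.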
I can think of two options. I don't know how much sense either makes, but:
Create a subclass of the original class and (assuming that the line is only used in one method of that class) simply override that single method, leaving the rest of the code the same.
If the class that calls this SQL sends it through a customised sendSQLStatement(String sql) type of method, which handles the creation of the Statement object, the try/catch error handling, and so on, and returns the result set, you could add an override in that method to check which DB engine is being used.
This info is obtainable from DatabaseMetaData.getDatabaseProductName(), or alternatively from the getDriverName() method. You then test this string to see if it contains the word 'derby' and, if yes, send a different SQL statement.
Of course, later down the road you will need a final test to ensure that the original Oracle code still works.
You may even take the opportunity to make the whole code snippet more DB agnostic (i.e. cast the value to a string type, or to a date from a long, and then do the substring).
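A sketch of the second option. The SQL strings and the SUBSTR_DATE function are assumptions; in real code the product name would come from connection.getMetaData().getDatabaseProductName():

```java
public class SqlDialectSwitch {

    static final String ORACLE_SQL =
        "SELECT TO_DATE (SUBSTR (testdate, 1, 9), 'DD-MM-RR') FROM testtable";
    static final String DERBY_SQL =
        "SELECT SUBSTR_DATE(testdate) FROM testtable"; // assumes a custom Derby function

    // databaseProductName would normally be read from DatabaseMetaData
    public static String chooseSql(String databaseProductName) {
        if (databaseProductName != null
                && databaseProductName.toLowerCase().contains("derby")) {
            return DERBY_SQL;
        }
        return ORACLE_SQL;
    }
}
```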
I am new to OData, so any help would be appreciated.
I created a test ASP.NET Web API project to query data from SQL Server using OData, with Dapper used instead of EF. I found that the query filter does not get pushed down into the query that is executed on the database.
Q1. Does the push-down work only with EF?
Q2. Would OData work with any source that has an ODBC driver?
Code snippet:
public class InboundMetaDataController : ODataController
{
    DapperContext db = new DapperContext();

    [EnableQuery]
    public IQueryable<InboundMetaData> Get()
    {
        return GetInboundMeta();
    }

    public IEnumerable<InboundMetaData> GetInboundMetaRecords()
    {
        var query = "SELECT Id, DataSource, Client, DataPath FROM Datalake.InboundMetaData";
        using (var connection = db.CreateConnection())
        {
            return connection.Query<InboundMetaData>(query).AsEnumerable();
        }
    }

    public IQueryable<InboundMetaData> GetInboundMeta()
    {
        IEnumerable<InboundMetaData> qry = GetInboundMetaRecords();
        return qry.AsQueryable();
    }
}

public class DapperContext
{
    private readonly string _connectionString = @"server=server1; database=test_db; Integrated Security=true; Encrypt=false";

    public IQueryable<InboundMetaData> InboundMetaData { get; internal set; }

    public IDbConnection CreateConnection()
        => new SqlConnection(_connectionString);
}
Thanks
Manoj George
The .Net OData implementation translates the incoming OData query into a LINQ expression tree. Unfortunately this can't easily be mapped into a Dapper query because it is not translated directly into an SQL query string.
Q1. Does the push-down work only with EF?
So the direct answer is no: the push-down doesn't only work with EF, but out of the box it only works with a provider that supports IQueryable expression trees. Otherwise you need to manually parse the expression into SQL.
Q2. Would OData work with any source that has an ODBC driver?
You can make it work with any backend provider, but depending on the implementation you might have to do a lot of mapping or query building work yourself.
Technically, EF can be made to work with ODBC drivers, the ability to be vendor agnostic is often a reason to use EF in the first place. You can write your own custom implementation when needed, and for many ODBC drivers you might need to.
In regard to Dapper, however: Dapper does not support IQueryable. Other users have found specific solutions to this (see "Dapper, ODATA, and IQueryable in ASP.NET") by iterating through the expression tree to build the SQL.
The problem with a solution like that is that you have now manually replicated logic similar to what EF provides for you, and it is likely to take longer to execute than EF, which has been specifically optimised for this type of processing. So the main argument for using Dapper has been invalidated.
In most implementations, OData over Dapper will have significantly reduced query features (it will possibly not support aggregates or expansions), and it will take longer to execute or consume significantly more operational memory than if you had used Entity Framework as the ORM.
This is the line that first breaks the concept:
return connection.Query<InboundMetaData>(query).AsEnumerable();
At that point, before the request arguments are evaluated, the raw query without a filter is loaded into memory. .AsEnumerable() is the same as using .ToList() or .ToArray() in terms of moving the data execution into the API and out of the underlying data store.
This is the second anti-pattern in the same method:
using (var connection = db.CreateConnection())
{
    return connection.Query<InboundMetaData>(query).AsEnumerable();
}
Because the DB Connection is closed before the controller method has completed execution, even if the output supported an IQueryable expression tree, the EnableQueryAttribute can only operate on the data that is in memory. If the query had not yet been executed, the whole call would fail because the connection has been closed before the query was executed.
For that reason, in OData controllers we tend to declare the DbContext or DbConnection for the lifetime of the request and not dispose of it before the response content has been serialized.
If you're interested, I've written up a blog article that might help: Should I use Dapper?
I am using spring-data-elasticsearch (latest version) along with a docker instance of elasticsearch (latest version), and I want to calculate a field on all results that are returned from my repository after a query. I do not want this information in the repository, because it is sometimes query dependent and sometimes environment dependent.
For example, if we perform a query, I want to generate a URL that includes the query terms as query parameters, and enrich the result with that URL. There are some other cases, too.
I have tried creating a Spring Data custom reading converter that accepts the whole document object. I can see that it is recognized when the application starts, but it is never invoked. How can I either project a field with a custom value, or enrich the returned documents with a contextually calculated value?
I first thought about AfterConvertCallback as well, like Chin commented, but in a callback you have no context of the query that was run to get the entity, so you cannot use things like query terms to build something.
I would add the property - let's name it url of type String here - to the entity and mark it with the org.springframework.data.annotation.Transient annotation to prevent it from being stored.
Then in the method where you do the search, either using ElasticsearchOperations or a repository, postprocess the returned entities (code not tested, just written down here):
SearchHits<Entity> searchHits = repository.findByFoo(fooValue);
searchHits.getSearchHits().forEach(searchHit -> {
    searchHit.getContent().setUrl(someValueDerivedFromEnvironmentAndQuery);
});
After that proceed using the SearchHits.
I like a hybrid approach of combining the answers from both @ChinHuang and @PJMeisch. Both answers have their applicability, depending on the context or situation. I like Chin Huang's suggestion for instance-based information, where you would need things like configuration values. I also agree that PJ Meisch is correct in his concern that this does not give you access to the immediate query, so I like his idea of intercepting/mapping the values when the data is being returned from the data store. I appreciate the great information from both people, because this combination of both approaches is a solution that I am happy with.
I prefer to use a repository interface wherever possible, because many people incorrectly mix business logic into their repositories. If I want custom implementation, then I am forced to really think about it, because I have to create an "Impl" class to achieve it. This is not the gravest of errors, but I always accompany a repository with a business service that is responsible for any data grooming, or any programmatic action that is not strictly retrieval, or persistence, of data.
Here is the part of my module configuration where I create the custom AfterConvertCallback. I set the base URL in the onAfterConvert method:
@Bean
AfterConvertCallback<BookInfo> bookInfoAfterConvertCallback() {
    return new BookInfoAfterConvertCallback(documentUrl);
}

static class BookInfoAfterConvertCallback implements AfterConvertCallback<BookInfo> {

    private final String documentUrl;

    public BookInfoAfterConvertCallback(String documentUrl) {
        this.documentUrl = documentUrl;
    }

    @Override
    public BookInfo onAfterConvert(final BookInfo entity, final Document document, final IndexCoordinates indexCoordinates) {
        entity.setUrl(String.format("%s?id=%d", documentUrl, entity.getId()));
        return entity;
    }
}
In the data service that invokes the repository query, I wrote a pair of functions that creates the query param portion of the URL so that I can append it in any applicable method that uses the auto-wired repository instance:
/**
 * Given a term, encode it so that it can be used as a query parameter in a URL.
 */
private static final Function<String, String> encodeTerm = term -> {
    try {
        return URLEncoder.encode(term, StandardCharsets.UTF_8.name());
    } catch (UnsupportedEncodingException e) {
        log.warn("Could not encode search term for document URL", e);
        return null;
    }
};

/**
 * Given a list of search terms, transform them into encoded URL query parameters and append
 * them to the given URL. (Note: the null filter must run on the encoded term, before the
 * String.format call, otherwise it never filters anything.)
 */
private static final BiFunction<List<String>, String, String> addEncodedUrlQueryParams = (searchTerms, url) ->
    searchTerms.stream()
        .map(encodeTerm)
        .filter(Objects::nonNull)
        .map(term -> String.format("term=%s", term))
        .collect(Collectors.joining("&", url + "&", ""));
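To illustrate the output (the URL and the search terms below are hypothetical), here is a self-contained version of the two functions plus a small demo, with the logging dropped so it compiles on its own:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.stream.Collectors;

public class UrlParamDemo {

    private static final Function<String, String> encodeTerm = term -> {
        try {
            return URLEncoder.encode(term, StandardCharsets.UTF_8.name());
        } catch (UnsupportedEncodingException e) {
            return null; // UTF-8 is always available, so this never happens
        }
    };

    private static final BiFunction<List<String>, String, String> addEncodedUrlQueryParams =
        (searchTerms, url) -> searchTerms.stream()
            .map(encodeTerm)
            .filter(Objects::nonNull)
            .map(term -> String.format("term=%s", term))
            .collect(Collectors.joining("&", url + "&", ""));

    public static String demo() {
        return addEncodedUrlQueryParams.apply(
            Arrays.asList("spring data", "elasticsearch"),
            "https://example.com/search?id=42");
    }
}
```

Calling demo() yields https://example.com/search?id=42&term=spring+data&term=elasticsearch.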
This absolutely can all be done in a repository instance, or in its enclosing service. But, when you want to intercept all data that is retrieved, and do something with it that is not specific to the query, then the callback is a great option because it does not incur the maintenance cost of needing to introduce it in every data layer method where it should apply. At query time, when you need to reference information that is only available in the query, it is clearly a matter of introducing this type of code into your data layer (service or repo) methods.
I am adding this as an answer because, even though I didn't realize it at the time I posted my question, these are two concerns that are separate enough to warrant both approaches. I do not want to claim credit for this answer, so I will not select it as the answer unless you both comment on this and tell me that you want me to do that.
I read that getOne() is lazily loaded and findOne() fetches the whole entity right away. I've checked the debugging log, and I even enabled monitoring on my SQL server to see which statements get executed. I found that both getOne() and findOne() generate and execute the same query. However, when I use getOne() the values are initially null (except for the id, of course).
So could anyone please tell me: if both methods execute the same query on the database, why should I use one over the other? I'm basically looking for a way to fetch an entity without getting all of its children/attributes.
EDIT1:
Entity code
Dao code:
@Repository
public interface FlightDao extends JpaRepository<Flight, Long> {
}
Debugging log findOne() vs getOne()
EDIT2:
Thanks to Chlebik I was able to identify the problem. As Chlebik stated, if you try to access any property of the entity fetched by getOne(), the full query will be executed. In my case, I was checking the behavior while debugging, moving one line at a time; I totally forgot that while debugging the IDE tries to access object properties for debugging purposes (or at least that's what I think is happening), so debugging triggers the full query execution. I stopped debugging, then checked the logs, and everything appeared normal.
getOne() vs findOne() (this log is taken from the MySQL general_log, not Hibernate):
Debugging log
No debugging log
It is just a guess, but in 'pure JPA' there is a method of EntityManager called getReference, and it is designed to retrieve an entity with only its ID set. It was mostly used for indicating that a reference exists without the need to retrieve the whole entity. Maybe the code will tell more:
// em is EntityManager
Department dept = em.getReference(Department.class, 30); // Gets only entity with ID property, rest is null
Employee emp = new Employee();
emp.setId(53);
emp.setName("Peter");
emp.setDepartment(dept);
dept.getEmployees().add(emp);
em.persist(emp);
I assume getOne serves the same purpose. Why are the generated queries the same, you ask? Well, AFAIR in the JPA bible (Pro JPA 2 by Mike Keith and Merrick Schincariol) almost every paragraph contains something like 'the behaviour depends on the vendor'.
EDIT:
I've set up my own test. Finally I came to the conclusion that if you interfere in any way with the entity fetched with getOne (even just calling entity.getId()), SQL is executed. Although if you use it only to create a proxy (e.g. as a relationship indicator, like shown in the code above), nothing happens and no additional SQL is executed. So I assume that in your service class you do something with this entity (use a getter, log something) and that is why the output of the two methods looks the same.
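The gist of this can be modelled in plain Java. This is a deliberately simplified sketch, not Hibernate's actual proxy code; whether getId() also triggers initialization depends on the mapping's access type, and in the test above it did:

```java
import java.util.function.Supplier;

// Simplified model of a JPA lazy proxy: the id is known immediately,
// while any other getter triggers the actual load (the SELECT).
class LazyFlight {
    private final Long id;
    private final Supplier<String> loader; // stands in for the DB query
    private String name;
    private boolean loaded = false;

    LazyFlight(Long id, Supplier<String> loader) {
        this.id = id;
        this.loader = loader;
    }

    Long getId() {
        return id; // the proxy already knows the id
    }

    String getName() {
        if (!loaded) {           // first access of a non-id property...
            name = loader.get(); // ...fires the "query"
            loaded = true;
        }
        return name;
    }

    boolean wasLoaded() {
        return loaded;
    }
}
```

This also shows why stepping through in a debugger forces the load: the IDE calls the getters to render the object, which initializes the proxy.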
ChlebikGitHub with example code
SO helpful question #1
SO helpful question #2
Suppose you want to remove an entity by id. In SQL you could execute a query like this:
delete from TABLE_NAME where id = ?
And in Hibernate, first you have to get a managed instance of your Entity and then pass it to EntityManager.remove method.
Entity a = em.find(Entity.class, id);
em.remove(a);
But this way, you have to fetch the entity you want to delete from the database before deletion. Is that really necessary?
The method EntityManager.getReference returns a Hibernate proxy without querying the database or setting the properties of your entity, unless you access properties of the returned proxy yourself.
The method JpaRepository.getOne uses EntityManager.getReference instead of EntityManager.find. So whenever you need a managed object but don't really need to query the database for it, it's better to use JpaRepository.getOne to eliminate the unnecessary query.
If no data is found in the table for the particular ID, findOne will return null, whereas getOne will throw javax.persistence.EntityNotFoundException.
Both have their own pros and cons. Please see the example below:
If 'data not found' is not a failure case for you (e.g. you are just verifying that the data has been deleted, and success means the result is null), you can use findOne.
In the other case, you can use getOne.
Choose between them according to your requirements, if you know the expected outcomes.
A project I am working on uses an Oracle database with row-level security. I need to call DBMS_APPLICATION_INFO.SET_CLIENT_INFO('userId') before I can execute any other SQL statements. I am trying to figure out a way to implement this within MyBatis. Several ideas that I had, but was unable to make work, include the following:
Attempt 1
<select id="selectIds" parameterType="string" resultType="Integer">
call DBMS_APPLICATION_INFO.SET_CLIENT_INFO(#{userId});
select id from FOO
</select>
However, you can't execute two statements within a single JDBC call, and MyBatis doesn't have support for JDBC batch statements, or at least not that I could find.
Attempt 2
<select id="selectMessageIds" parameterType="string" resultType="Integer">
<![CDATA[
declare
type ID_TYP is table of AGL_ID.ID_ID%type;
ALL_IDS ID_TYP;
begin
DBMS_APPLICATION_INFO.SET_CLIENT_INFO(#{userId});
select ID bulk collect
into ALL_IDS
from FOO;
end;
]]>
</select>
However, that is as far as I got, because I learned that you can't return data from a procedure, only from a function, so there was no way to return the data.
Attempt 3
I've considered just creating a simple MyBatis statement that will set the client information and it will need to be called before executing statements. This seems the most promising, however, we are using Spring and database connection pooling and I am concerned about race conditions. I want to ensure that the client information won't bleed over and affect other statements because the connections will not get closed, they will get reused.
Software/Framework Version Information
Oracle 10g
MyBatis 3.0.5
Spring 3.0.5
Update
Forgot to mention that I am also using MyBatis Spring 1.0.1
This sounds like a perfect candidate for transactions. You can create a @Transactional service (or DAO) base class that makes the DBMS_APPLICATION function call. All your other service classes could extend the base class and call the necessary SQL.
In the base class, you want to make sure that you only call the DBMS_APPLICATION function once. To do this, use the TransactionSynchronizationManager.hasResource() and bindResource() methods to bind a boolean or similar marker value to the current TX. Check this value to determine if you need to make the function call or not.
If the function call exists only for a 'unit of work' in the DB, this should be all you need. If the call exists for the duration of the connection, the base class will need to clean up in a finally block somehow.
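A plain-Java sketch of the once-per-transaction marker idea. In Spring you would bind the marker to the current TX with TransactionSynchronizationManager.hasResource()/bindResource() rather than a ThreadLocal; all names here are illustrative, and the actual DBMS_APPLICATION_INFO call is left as a comment:

```java
// Sketch of "call SET_CLIENT_INFO only once per unit of work".
public class ClientInfoGuard {

    private static final ThreadLocal<Boolean> SET = ThreadLocal.withInitial(() -> Boolean.FALSE);
    private static int calls = 0; // only to observe the behaviour in this sketch

    public static void ensureClientInfo(String userId) {
        if (!SET.get()) {
            // Here you would actually execute, on the current connection:
            //   {call DBMS_APPLICATION_INFO.SET_CLIENT_INFO(?)}
            calls++;
            SET.set(Boolean.TRUE);
        }
    }

    // Call this from a finally block when the transaction ends, so a pooled
    // connection reused by another user gets tagged again and nothing bleeds over.
    public static void clear() {
        SET.remove();
    }

    public static int callCount() {
        return calls;
    }
}
```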
Rather than a base class, another possibility would be to use AOP and do the function call before method invocation and the clean up as finally advice. The key here would be to make sure that your interceptor is called after Spring's TransactionInterceptor (i.e. after the tx has started).
One of the safest solutions would be to have a specialized DataSourceUtils (see http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/jdbc/datasource/DataSourceUtils.html), override doGetConnection(DataSource dataSource), and call setClientInfo on the connection.
Write your own abstraction over SqlMapClientDaoSupport to pass the client information.
Assume that we have data inside the DTO object:
public void loginUser(UserDTO userDTO)
{
    String name = userDTO.getName();
    String pwd = userDTO.getPassword();
    String sql = "select UNAME, PWD from LoginTable where UNAME='" + name + "' and PWD='" + pwd + "'";
}
Please tell me: in this code, how can we prevent SQL injection? How can we check for malicious characters?
Your best bet is to move SQL from the DTO, where it doesn't belong, to the DAO, where it belongs, and use PreparedStatement there.
Here is the official tutorial on using PreparedStatement in JDBC. There are also plenty of others if you search around.
For the record, I must say that I disagree with the claim that the main advantage of a prepared statement is that it can be (though isn't necessarily) sent to the database in advance. The main advantage is parameters.
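A sketch of what that DAO method could look like with PreparedStatement. The table and column names are taken from the question; everything else is illustrative:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LoginDao {

    // The '?' placeholders are bound as data and never spliced into the SQL
    // text, so quotes or SQL keywords in the input cannot change the query.
    public static final String SQL =
        "select UNAME, PWD from LoginTable where UNAME = ? and PWD = ?";

    public boolean isValidUser(Connection con, String name, String pwd) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(SQL)) {
            ps.setString(1, name);
            ps.setString(2, pwd);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next(); // true if a matching row exists
            }
        }
    }
}
```

In a real application you would also compare against a stored password hash rather than the plain password, but the injection fix itself is the bound parameters.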