Spring JPA: JPQL date BETWEEN query does not work

I need to get the rows whose date range contains a given date.
For example, in my DBMS (I use MariaDB) I have a table like this:
feed
pk | start_day  | end_day
---+------------+-----------
1  | 2022-12-20 | 2022-12-22
2  | 2022-12-19 | 2022-12-23
3  | 2022-12-24 | 2022-12-25
So I use a query like this:
SELECT * FROM feed AS f
WHERE '2022-12-21' BETWEEN f.start_day AND f.end_day;
And it prints:
pk | start_day  | end_day
---+------------+-----------
1  | 2022-12-20 | 2022-12-22
2  | 2022-12-19 | 2022-12-23
But in Spring, I use JPQL in a custom repository like this:
public List<FeedEntity> myFeedMethod(String today) {
    final String SQL = "SELECT f FROM feed AS f " +
            "WHERE :today BETWEEN f.start_day AND f.end_day";
    List<FeedEntity> result = em.createQuery(SQL, FeedEntity.class)
            .setParameter("today", today)
            .getResultList();
    em.clear();
    return result;
}
But it returns an empty list.
How can I get the data by date?
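Empty results in this situation usually come from two things: JPQL must query the entity and its mapped fields (not the table `feed` or the columns `start_day`/`end_day`), and the `:today` parameter here is a `String` while the columns are dates. Below is a minimal runnable sketch of the inclusive BETWEEN logic in plain Java, assuming an entity `FeedEntity` with `LocalDate` fields `startDay` and `endDay` (those names are assumptions, not taken from the question):

```java
import java.time.LocalDate;
import java.util.List;
import java.util.stream.Collectors;

public class FeedBetweenSketch {
    // Hypothetical stand-in for FeedEntity; pk, startDay, endDay are assumed names.
    record Feed(int pk, LocalDate startDay, LocalDate endDay) {}

    // Plain-Java equivalent of the JPQL predicate
    // ":today BETWEEN f.startDay AND f.endDay" (inclusive at both ends).
    static List<Feed> feedsActiveOn(List<Feed> feeds, LocalDate today) {
        return feeds.stream()
                .filter(f -> !today.isBefore(f.startDay()) && !today.isAfter(f.endDay()))
                .collect(Collectors.toList());
    }

    // The three rows from the question's feed table.
    static List<Feed> sampleFeeds() {
        return List.of(
                new Feed(1, LocalDate.parse("2022-12-20"), LocalDate.parse("2022-12-22")),
                new Feed(2, LocalDate.parse("2022-12-19"), LocalDate.parse("2022-12-23")),
                new Feed(3, LocalDate.parse("2022-12-24"), LocalDate.parse("2022-12-25")));
    }

    public static void main(String[] args) {
        // 2022-12-21 falls inside feeds 1 and 2, matching the native SQL result.
        feedsActiveOn(sampleFeeds(), LocalDate.parse("2022-12-21"))
                .forEach(f -> System.out.println(f.pk()));
    }
}
```

In the repository itself, the equivalent JPQL would target the entity, e.g. `SELECT f FROM FeedEntity f WHERE :today BETWEEN f.startDay AND f.endDay`, with `setParameter("today", LocalDate.parse(today))` so the parameter type matches the mapped columns.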

Related

SnappyData: SQL PUT INTO on the job server doesn't aggregate values

I'm trying to create a jar to run on the snappy-job shell with streaming.
I have an aggregation function, and it works perfectly in windows. But I need a table with one value for each key. Based on an example from GitHub I created a jar file, and now I have a problem with the PUT INTO SQL command.
My code for aggregation:
val resultStream: SchemaDStream = snsc.registerCQ("select publisher, cast(sum(bid) as int) as bidCount from " +
  "AggrStream window (duration 1 seconds, slide 1 seconds) group by publisher")
val conf = new ConnectionConfBuilder(snsc.snappySession).build()

resultStream.foreachDataFrame(df => {
  df.write.insertInto("windowsAgg")
  println("Data received in streaming window")
  df.show()

  println("Updating table updateTable")
  val conn = ConnectionUtil.getConnection(conf)
  val result = df.collect()
  val stmt = conn.prepareStatement("put into updateTable (publisher, bidCount) values " +
    "(?,?+(nvl((select bidCount from updateTable where publisher = ?),0)))")

  result.foreach(row => {
    println("row" + row)
    val publisher = row.getString(0)
    println("publisher " + publisher)
    val bidCount = row.getInt(1)
    println("bidcount : " + bidCount)
    stmt.setString(1, publisher)
    stmt.setInt(2, bidCount)
    stmt.setString(3, publisher)
    println("Prepared statement after bind variables set: " + stmt.toString())
    stmt.addBatch()
  })
  stmt.executeBatch()
  conn.close()
})
snsc.start()
snsc.awaitTermination()
}
I have to update or insert into the table updateTable, but during the update the current value has to be added to the one from the stream.
Here is what I see when I execute the code:
select * from updateTable;
PUBLISHER |BIDCOUNT
--------------------------------------------
publisher333 |10
Then I sent message to kafka:
1488487984048,publisher333,adv1,web1,geo1,11,c1
and again select from updateTable:
select * from updateTable;
PUBLISHER |BIDCOUNT
--------------------------------------------
publisher333 |11
The bidCount value is overwritten instead of added.
But when I execute the PUT INTO command from the snappy-sql shell, it works perfectly:
put into updateTable (publisher, bidcount) values ('publisher333',4+
(nvl((select bidCount from updateTable where publisher =
'publisher333'),0)));
1 row inserted/updated/deleted
snappy> select * from updateTable;
PUBLISHER |BIDCOUNT
--------------------------------------------
publisher333 |15
Could you help me with this case? Maybe someone has another solution for inserting or updating a value using SnappyData?
Thanks in advance.
The bidCount value is read from the tomi_update table in the streaming case but from updateTable in the snappy-sql case. Is this intentional? Maybe you wanted to use updateTable in both cases?

Nested query with Spring Data JPA and EclipseLink

I am working with Spring, Spring Data JPA, and EclipseLink, and I am using a repository to define my queries. I have a nested query; it works in MySQL directly, but not in my repository:
@Query("Select don from Donnee don where don.RowNumber in "
    + "(Select DISTINCT(do.RowNumber) from Donnee do"
    + " WHERE char_length(do.valeur) > 14 and Substring(do.valeur, 7, 4) = ?1 )")
public List<Donnee> DonnesparDate(String date);
I get this error: [101, 164] The expression is not a valid conditional expression.
Any help please?
Try it with the entity manager:
String query = "Select don from Donnee don where don.RowNumber in "
    + "(Select DISTINCT(do.RowNumber) from Donnee do"
    + " WHERE char_length(do.valeur) > 14 and Substring(do.valeur, 7, 4) = ?1 )";
EntityManager em = this.emf.createEntityManager();
Query result = em.createQuery(query);
List results = result.getResultList();
And also autowire the EntityManagerFactory like this:
@Autowired
private EntityManagerFactory emf;
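For reference, the subquery's predicate only checks the string length and a fixed substring. A runnable plain-Java equivalent is below; the helper name is made up, and note that JPQL's `Substring(do.valeur, 7, 4)` is 1-based while Java's `String.substring` is 0-based, so position 7 for length 4 becomes `substring(6, 10)`:

```java
public class ValeurFilterSketch {
    // Plain-Java version of the JPQL predicate
    // "char_length(do.valeur) > 14 and Substring(do.valeur, 7, 4) = ?1".
    // JPQL SUBSTRING is 1-based; Java's substring(6, 10) covers the same 4 chars.
    static boolean matches(String valeur, String datePart) {
        return valeur.length() > 14 && valeur.substring(6, 10).equals(datePart);
    }

    public static void main(String[] args) {
        System.out.println(matches("ABC1232014XYZ99", "2014")); // true: 15 chars, "2014" at position 7
        System.out.println(matches("short", "2014"));           // false: length check fails first
    }
}
```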

jdbcTemplate.queryForList returns a list of Maps where all column values are NULL

Any call using jdbcTemplate.queryForList returns a list of Maps that have NULL values for all columns. The columns should have had string values.
I do get the correct number of rows when compared to the result I get when I run the same query in a native SQL client.
I am using the JDBC-ODBC bridge, and the database is MS SQL Server 2008.
I have the following code in my DAO:
public List internalCodeDescriptions(String listID) {
    List rows = jdbcTemplate.queryForList("select CODE, DESCRIPTION from CODE_DESCRIPTIONS where LIST_ID=? order by sort_order asc", new Object[] {listID});
    // debug code start
    try {
        Connection conn1 = jdbcTemplate.getDataSource().getConnection();
        Statement stat = conn1.createStatement();
        boolean sok = stat.execute("select code, description from code_descriptions where list_id='TRIGGER' order by sort_order asc");
        if (sok) {
            ResultSet rs = stat.getResultSet();
            ResultSetMetaData rsmd = rs.getMetaData();
            String columnname1 = rsmd.getColumnName(1);
            String columnname2 = rsmd.getColumnName(2);
            int type1 = rsmd.getColumnType(1);
            int type2 = rsmd.getColumnType(2);
            String tn1 = rsmd.getColumnTypeName(1);
            String tn2 = rsmd.getColumnTypeName(2);
            log.debug("Testquery gave resultset with:");
            log.debug("Column 1 -name:" + columnname1 + " -typeID:" + type1 + " -typeName:" + tn1);
            log.debug("Column 2 -name:" + columnname2 + " -typeID:" + type2 + " -typeName:" + tn2);
            int i = 1;
            while (rs.next()) {
                String cd = rs.getString(1);
                String desc = rs.getString(2);
                log.debug("Row #" + i + ": CODE='" + cd + "' DESCRIPTION='" + desc + "'");
                i++;
            }
        } else {
            log.debug("Query execution returned false");
        }
    } catch (SQLException se) {
        log.debug("Something went haywire in the debug code:" + se.toString());
    }
    log.debug("Original jdbcTemplate list result gave:");
    Iterator<Map<String, Object>> it1 = rows.iterator();
    while (it1.hasNext()) {
        Map mm = (Map) it1.next();
        log.debug("Map:" + mm);
        String code = (String) mm.get("CODE");
        String desc = (String) mm.get("description");
        log.debug("CODE:" + code + " : " + desc);
    }
    // debug code end
    return rows;
}
As you can see, I've added some debugging code to list the results from queryForList, and I also obtain the connection from the jdbcTemplate object and use it to send the same query through the basic JDBC methods (listID='TRIGGER').
What is puzzling me is that the log outputs something like this:
Testquery gave resultset with:
Column 1 -name:code -typeID:-9 -typeName:nvarchar
Column 2 -name:description -typeID:-9 -typeName:nvarchar
Row #1: CODE='C1' DESCRIPTION='BlodoverxF8rin eller bruk av blodprodukter'
Row #2: CODE='C2' DESCRIPTION='Kodetilfelle, hjertestans/respirasjonstans'
Row #3: CODE='C3' DESCRIPTION='Akutt dialyse'
...
Row #58: CODE='S14' DESCRIPTION='Forekomst av hvilken som helst komplikasjon'
...
Original jdbcTemplate list result gave:
Map:(CODE=null, DESCRIPTION=null)
CODE:null : null
Map:(CODE=null, DESCRIPTION=null)
CODE:null : null
...
58 repetitions total.
Why does the result from the queryForList method return NULL in all columns for every row? How can I get the result I want using jdbcTemplate.queryForList?
The xF8 should be the letter ø, so I have some encoding issues, but I can't see how that could cause all values, including strings without any strange letters (see row #2), to turn into NULL values in the list of maps returned from jdbcTemplate.queryForList.
The same code ran fine on another server against a MySQL Server 5.5 database using the jdbc driver for MySQL.
The issue was resolved by using the MS SQL Server JDBC driver rather than the JDBC-ODBC bridge. I don't know why it didn't work with the bridge, though.
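For anyone hitting the same problem, the switch amounts to connecting through Microsoft's JDBC driver instead of the bridge. A configuration sketch follows; the host, port, database name, and credentials are placeholders, and the driver jar (mssql-jdbc) must be on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class SqlServerConnectionSketch {
    public static Connection open() throws Exception {
        // "jdbc:sqlserver://..." is the Microsoft driver's URL scheme,
        // unlike the bridge's "jdbc:odbc:..." URLs.
        String url = "jdbc:sqlserver://localhost:1433;databaseName=mydb";
        return DriverManager.getConnection(url, "user", "password");
    }
}
```

A JdbcTemplate built on a DataSource pointing at this URL should then return properly populated maps from queryForList.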

Prepared statement in a multithreaded environment

I have used the MERGE command in my prepared statement. When I executed it in a single-threaded environment it worked fine, but in a multithreaded environment it causes a problem: data is duplicated. If I have 5 threads, each record is duplicated 5 times. I think there is no lock in the DB to help the threads.
My code:
// db: Oracle
sb.append("MERGE INTO EMP_BONUS EB USING (SELECT 1 FROM DUAL) on (EB.EMP_id = ?) " +
    "WHEN MATCHED THEN UPDATE SET TA =?,DA=?,TOTAL=?,MOTH=? WHEN NOT MATCHED THEN " +
    "INSERT (EMP_ID, TA, DA, TOTAL, MOTH, NAME)VALUES(?,?,?,?,?,?) ");

// SQL operation, called from the run() method
public void executeMerge(String threadName) throws Exception {
    ConnectionPro cPro = new ConnectionPro();
    Connection connE = cPro.getConection();
    connE.setAutoCommit(false);
    System.out.println(sb.toString());
    System.out.println("Threadname=" + threadName);
    PreparedStatement pStmt = connE.prepareStatement(sb.toString());
    try {
        count = count + 1;
        for (Employee employeeObj : employee) { // data list of employees
            pStmt.setInt(1, employeeObj.getEmp_id());
            pStmt.setDouble(2, employeeObj.getSalary() * .10);
            pStmt.setDouble(3, employeeObj.getSalary() * .05);
            pStmt.setDouble(4, employeeObj.getSalary()
                    + (employeeObj.getSalary() * .05)
                    + (employeeObj.getSalary() * .10));
            pStmt.setInt(5, count);
            pStmt.setDouble(6, employeeObj.getEmp_id());
            pStmt.setDouble(7, employeeObj.getSalary() * .10);
            pStmt.setDouble(8, employeeObj.getSalary() * .05);
            pStmt.setDouble(9, employeeObj.getSalary()
                    + (employeeObj.getSalary() * .05)
                    + (employeeObj.getSalary() * .10));
            pStmt.setInt(10, count);
            pStmt.setString(11, threadName);
            // pStmt.executeUpdate();
            pStmt.addBatch();
        }
        pStmt.executeBatch();
        connE.commit();
    } catch (Exception e) {
        connE.rollback();
        throw e;
    } finally {
        pStmt.close();
        connE.close();
    }
}
If employee.size() == 5 and the thread count is 5, after execution I get 25 records instead of 5.
If there is no constraint (i.e. a primary key or a unique key constraint on the emp_id column in emp_bonus), there would be nothing to prevent the database from allowing each thread to insert 5 rows. Since each database session cannot see uncommitted changes made by other sessions, each thread would see that there was no row in emp_bonus with the emp_id the thread is looking for (I'm assuming that employeeObj.getEmp_id() returns the same 5 emp_id values in each thread), so each thread would insert all 5 rows, leaving you with a total of 25 rows if there are 5 threads.

If you have a unique constraint that prevents the duplicate rows from being inserted, Oracle will make the other 4 threads block until the first thread commits, allowing the subsequent threads to do updates rather than inserts. Of course, this will cause the threads to be serialized, defeating any performance gains you would get from running multiple threads.
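The check-then-insert race described above can be sketched outside the database. In the sketch below, `putIfAbsent` plays the role of a unique constraint on emp_id: the insert is atomic, so five threads racing over the same five keys still end with five rows. This is an illustration of the principle only, not Oracle or JDBC code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MergeRaceSketch {
    // Five threads each try to merge the same five emp_ids. putIfAbsent is
    // atomic, standing in for a MERGE guarded by a unique key: the first
    // insert for a key wins and the rest fall through to the "update" branch.
    static ConcurrentHashMap<Integer, Integer> simulate() throws InterruptedException {
        ConcurrentHashMap<Integer, Integer> table = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(5);
        for (int t = 0; t < 5; t++) {
            pool.submit(() -> {
                for (int empId = 1; empId <= 5; empId++) {
                    Integer prev = table.putIfAbsent(empId, 0); // atomic insert-or-skip
                    if (prev != null) {
                        table.merge(empId, 1, Integer::sum);    // WHEN MATCHED: update
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return table;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(simulate().size()); // 5 keys, never 25
    }
}
```

Without the atomicity (e.g. a plain HashMap guarded by a separate "contains" check), each thread could pass the check before any other thread's insert is visible, which is exactly how the 25 rows arise in the database.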

LINQ to SQL array always returns first item?

I am using LINQ to SQL. My database has 3 columns: Ref, Due_amount, and Due_Date.
The data may look like this, for example:
10 02/08/2009 00:00:00 175.0000
10 02/09/2009 00:00:00 175.0000
10 02/10/2009 00:00:00 175.0000
10 02/11/2009 00:00:00 175.0000
10 02/12/2009 00:00:00 175.0000
10 02/01/2010 00:00:00 175.0000
My code below returns 6 elements and works; however, the date is always 02/08/2009. If I, say, change row 2's amount to 150.0000, it then returns the correct date of 02/09/2009.
Any ideas?
private static void PopulateInstalments(string Ref, ResponseMessage responseMsg)
{
    using (DAO dbContext = new DAO())
    {
        IEnumerable<profile> instalments = (from instalment in dbContext.profile
                                            where instalment.ref == Ref
                                            select instalment);
        foreach (profile instalment in instalments)
        {
            if (responseMsg.Instalments == null)
                responseMsg.Instalments = new ArrayOfInstalments();

            Instalment tempInstalment = new Instalment();
            tempInstalment.DueAmount = instalment.Due_amount;
            tempInstalment.DueDate = instalment.Due_date == null ? "" : instalment.Due_date.ToString();
            responseMsg.Instalments.Add(tempInstalment);
        }
    }
}
Thanks
Richard
Ensure the primary key column is set in the source (a SQL Server database in this case).
