I have a Flex application that uses BlazeDS to connect to a Java backend. Using remoting, I call an API that runs a SELECT statement on a table (using conventional JDBC classes) in an Oracle database.
The table has 2 columns:
PRODUCT_CODE of type NVARCHAR2(32) and
DEMAND of type NUMBER(10, 0)
My Java API is as follows:
public List<Object[]> getQueryResult(String query) throws SQLException {
    // try-with-resources so the connection, statement and result set are always closed
    try (Connection conn = DriverManager.getConnection(connStr, userName, password);
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(query)) {
        List<Object[]> result = new ArrayList<>();
        while (rs.next()) {
            Object[] itemArray = new Object[2];
            itemArray[0] = rs.getObject(1); // PRODUCT_CODE
            itemArray[1] = rs.getObject(2); // DEMAND (the driver returns this as BigDecimal)
            result.add(itemArray);
        }
        return result;
    }
}
On the Flex side, I have a handler for the result event of this remote operation:
private function onResult(e:ResultEvent) : void {
var result:ArrayCollection = (e.result as ArrayCollection);
}
Strangely, the values from the DEMAND column are automatically converted to String (I debugged and found that on the backend they were BigDecimal values).
Any suggestions?
Yes indeed, a Java BigDecimal is converted to a String in ActionScript, as documented in the developer guide. There is no BigDecimal in ActionScript, so this was the only safe option: not every BigDecimal value can be converted to an int or a float without losing precision.
If you are sure that your NUMBER(10,0) values fall within the int range (-2,147,483,648 to 2,147,483,647), you can convert them to an int in Java - see the code below:
itemArray[1] = ((BigDecimal)rs.getObject(2)).intValue();
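Applied to the original method, the loop would look roughly like this (a minimal sketch; it assumes java.math.BigDecimal is imported and that every DEMAND value fits in an int - since NUMBER(10,0) can hold up to ten digits, longValue() is the safer choice if you are not sure):

while (rs.next()) {
    Object[] itemArray = new Object[2];
    itemArray[0] = rs.getObject(1); // PRODUCT_CODE stays a String
    // NUMBER(10,0) arrives as BigDecimal; narrow it so BlazeDS sends a numeric type to Flex
    BigDecimal demand = (BigDecimal) rs.getObject(2);
    itemArray[1] = demand.intValue();
    result.add(itemArray);
}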
With the Oracle JDBC driver (ojdbc7.jar), when I do x = resultSet.getString("COLUMN_DEF") for a column whose default value is 'N/A' in the database ('N/A' set at table creation, 'N/A' seen with the DBeaver tool), the JDBC driver returns x = "'n/a'" (with PostgreSQL and MySQL it returns x = "N/A").
Do you have any idea why it is in lower case and why it is quoted inside the result string?
Thanks in advance for any kind of help on this issue!
PS: this is how I use the database metadata object:
private static void readColumnMetaData(AMIDBLoader dbLoader, DatabaseMetaData metaData, String internalCatalog, String externalCatalog, String _table, Map<String, String> amiEntityMap, Map<String, String> amiFieldMap) throws SQLException
{
    try(ResultSet resultSet = metaData.getColumns(internalCatalog, internalCatalog, _table, "%"))
    {
        while(resultSet.next())
        {
            String table = resultSet.getString("TABLE_NAME");
            String name = resultSet.getString("COLUMN_NAME");
            String type = resultSet.getString("TYPE_NAME");
            int size = resultSet.getInt("COLUMN_SIZE");
            int digits = resultSet.getInt("DECIMAL_DIGITS");
            String def = resultSet.getString("COLUMN_DEF");
            // ... (rest of the method omitted)
        }
    }
}
Code for the table creation:
CREATE TABLE "router_locations" (
"id" NUMBER(*, 0),
"continentCode" VARCHAR2(3) DEFAULT 'N/A',
"countryCode" VARCHAR2(3) DEFAULT 'N/A'
);
Jerome
We managed to find where the string was being modified; the JDBC driver is OK. Thanks for your help.
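As for the quoting itself: Oracle keeps the DEFAULT clause as the literal expression text from the DDL (the DATA_DEFAULT dictionary column), so for a string default COLUMN_DEF typically comes back as 'N/A' with the single quotes included. If you need the bare value, stripping them is straightforward - a sketch, assuming the default is a simple quoted literal:

String def = resultSet.getString("COLUMN_DEF");
if (def != null) {
    def = def.trim();
    // Remove the surrounding single quotes that Oracle keeps in the stored default expression
    if (def.length() >= 2 && def.startsWith("'") && def.endsWith("'")) {
        def = def.substring(1, def.length() - 1);
    }
}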
Can Azure Data Factory transform a string coming from Oracle (an XML) into a JSON object and save it in a Cosmos DB collection as an object?
I've tried, but I only get a string (a JSON object) as a simple attribute in Cosmos DB.
In my experience, Azure Data Factory can transfer the data, but it will not do the serialization steps for you. So the data stored in Cosmos DB may look like "info": "{\"Id\":\"1\",\"Name\":\"AA\",\"Address\":\"HQ - Main Line\"}".
To handle this, I suggest using an Azure Function with a Cosmos DB trigger. Please refer to my code:
using System;
using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Newtonsoft.Json.Linq;

namespace ProcessJson
{
    public class Class1
    {
        [FunctionName("DocumentUpdates")]
        public static void Run(
            [CosmosDBTrigger(databaseName: "db", collectionName: "item", ConnectionStringSetting = "CosmosDBConnection",
                LeaseCollectionName = "leases", CreateLeaseCollectionIfNotExists = true)]
            IReadOnlyList<Document> documents,
            TraceWriter log)
        {
            log.Verbose("Start.........");
            String endpointUrl = "https://***.documents.azure.com:443/";
            String authorizationKey = "***";
            String databaseId = "db";
            String collectionId = "import";
            DocumentClient client = new DocumentClient(new Uri(endpointUrl), authorizationKey);

            for (int i = 0; i < documents.Count; i++)
            {
                Document doc = documents[i];
                // Parse the escaped JSON string and store it back as a real JSON object
                String info = doc.GetPropertyValue<String>("info");
                JObject o = JObject.Parse(info);
                doc.SetPropertyValue("info", o);
                // Wait for the replace to finish so the function does not return before the write completes
                client.ReplaceDocumentAsync(UriFactory.CreateDocumentUri(databaseId, collectionId, doc.Id), doc).Wait();
                log.Verbose("Update document Id " + doc.Id);
            }
        }
    }
}
In addition, you could refer to my previous case: Azure Cosmos DB SQL - how to unescape inner json property
Hope it helps you.
I am fairly new to Spring. I am looking to check whether a certain email id exists in the database or not, using Spring JdbcTemplate. I looked here but couldn't find a proper answer. I am looking for something like SELECT count(*) FROM table WHERE email = ?
Any help will be appreciated.
You can do something like the following if you are using JdbcTemplate and a recent version of Spring:
private boolean isEmailIdExists(String email) {
String sql = "SELECT count(*) FROM table WHERE email = ?";
int count = jdbcTemplate.queryForObject(sql, new Object[] { email }, Integer.class);
return count > 0;
}
The queryForObject method of JdbcTemplate accepts the SQL query as the first parameter, an array of objects for the query placeholders as the second argument, and the expected return type as the third.
In this case we have only one placeholder, hence the second argument is new Object[] { email }, and the result we expect is a count, which is an Integer, hence Integer.class.
I kind of got this answer from https://www.mkyong.com/spring/jdbctemplate-queryforint-is-deprecated/
You can go through it if you are interested.
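Note that on newer Spring versions (5.3+) the Object[] overload of queryForObject is deprecated; the varargs overload below is the usual replacement (a minimal sketch under that assumption):

private boolean isEmailIdExists(String email) {
    String sql = "SELECT count(*) FROM table WHERE email = ?";
    // Bind variables are passed as trailing varargs
    Integer count = jdbcTemplate.queryForObject(sql, Integer.class, email);
    return count != null && count > 0;
}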
private boolean isEmailIdExists(String email) {
return jdbcTemplate.queryForObject("SELECT EXISTS(SELECT FROM table WHERE email = ?)", Boolean.class, email);
}
(This uses PostgreSQL's EXISTS form; see http://www.postgresqltutorial.com/postgresql-exists/)
Is there any way I can get a ResultSet object from one of the JdbcTemplate query methods?
I have code like:
List<ResultSet> rsList = template.query(finalQuery, new RowMapper<ResultSet>() {
    public ResultSet mapRow(ResultSet rs, int rowNum) throws SQLException {
        return rs;
    }
});
I want to execute my SQL statement stored in the finalQuery String and get the ResultSet. The query is a complex join over 6 to 7 tables, I am selecting 4-5 columns from each table, and I want the metadata of those columns to transform data types and data for downstream systems.
If it were a simple query fetching from only one table, I could use RowMapper#mapRow and inside that mapRow method call ResultSetExtractor.extractData to get a list of results; but in this case I have complex joins in my query and I am trying to get the ResultSet object and, from it, the ResultSet metadata...
The above code is not good because it returns the same ResultSet object for every row, and I don't want to store those in a list...
One more thing: if mapRow is called for each row of my query, will JdbcTemplate close the ResultSet and the connection even though my list holds references to the ResultSet object?
Is there any simple method like jdbcTemplate.queryForResultSet(sql)?
I have now implemented my own ResultSetExtractor to process and insert data into downstream systems:
sourceJdbcTemplate.query(finalQuery, new CustomResultSetProcessor(targetTable, targetJdbcTemplate));
This CustomResultSetProcessor implements ResultSetExtractor, and in the extractData method I call 3 different methods: the first gets the column types from rs.getMetaData(), the second gets the column types of the target table by running
SELECT NAME, COLTYPE, TBNAME FROM SYSIBM.SYSCOLUMNS WHERE TBNAME ='TABLENAME' AND TABCREATOR='TABLE CREATOR'
and the third builds the (prepared) insert statement from the target column types and finally executes it using
new BatchPreparedStatementSetter()
{
    @Override
    public void setValues(PreparedStatement insertStmt, int i) throws SQLException { /* bind the values of row i */ }

    @Override
    public int getBatchSize() { return rowCount; } // the interface also requires the batch size
}
Hope this helps to others...
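For reference, a CustomResultSetProcessor along those lines could look roughly like the sketch below. This is only an illustration of the approach, not the original code: the class shape is an assumption, and for brevity it builds the INSERT from the source metadata instead of looking up the target column types in SYSIBM.SYSCOLUMNS.

import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.core.BatchPreparedStatementSetter;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.ResultSetExtractor;

public class CustomResultSetProcessor implements ResultSetExtractor<Integer> {

    private final String targetTable;
    private final JdbcTemplate targetJdbcTemplate;

    public CustomResultSetProcessor(String targetTable, JdbcTemplate targetJdbcTemplate) {
        this.targetTable = targetTable;
        this.targetJdbcTemplate = targetJdbcTemplate;
    }

    @Override
    public Integer extractData(ResultSet rs) throws SQLException, DataAccessException {
        // 1. Read the source column count (and, if needed, names/types) from the ResultSet metadata
        ResultSetMetaData md = rs.getMetaData();
        int columnCount = md.getColumnCount();

        // 2. Buffer the rows (a real implementation would insert in chunks instead)
        final List<Object[]> rows = new ArrayList<>();
        while (rs.next()) {
            Object[] row = new Object[columnCount];
            for (int c = 1; c <= columnCount; c++) {
                row[c - 1] = rs.getObject(c);
            }
            rows.add(row);
        }

        // 3. Build a parameterized INSERT for the target table and run it as a batch
        StringBuilder insert = new StringBuilder("INSERT INTO " + targetTable + " VALUES (");
        for (int c = 1; c <= columnCount; c++) {
            insert.append(c == columnCount ? "?)" : "?,");
        }
        int[] updateCounts = targetJdbcTemplate.batchUpdate(insert.toString(), new BatchPreparedStatementSetter() {
            @Override
            public void setValues(PreparedStatement ps, int i) throws SQLException {
                Object[] row = rows.get(i);
                for (int c = 0; c < row.length; c++) {
                    ps.setObject(c + 1, row[c]);
                }
            }

            @Override
            public int getBatchSize() {
                return rows.size();
            }
        });
        return updateCounts.length;
    }
}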
Note that the whole point of Spring's JdbcTemplate is that it automatically closes all resources, including the ResultSet, after the callback method has executed. It is therefore better to extract the necessary data inside the callback method and let Spring close the ResultSet afterwards.
If the result of the data extraction is not a List, you can use ResultSetExtractor instead of RowMapper:
SomeComplexResult r = template.query(finalQuery,
    new ResultSetExtractor<SomeComplexResult>() {
        public SomeComplexResult extractData(ResultSet rs) throws SQLException, DataAccessException {
            // do complex processing of the ResultSet and return its result as SomeComplexResult
        }
    });
Something like this would also work:
Connection con = DataSourceUtils.getConnection(dataSource); // your datasource
Statement s = con.createStatement();
ResultSet rs = s.executeQuery(query); // your query
ResultSetMetaData rsmd = rs.getMetaData();
// ... use the metadata, then clean up:
// JdbcUtils.closeResultSet(rs); JdbcUtils.closeStatement(s); DataSourceUtils.releaseConnection(con, dataSource);
Although I agree with @axtavt that ResultSetExtractor is preferred in a Spring environment, it does force you to execute the query.
The code below does not require you to do so, so the client code does not need to provide actual arguments for the query parameters:
public SomeResult getMetadata(String querySql) throws SQLException {
Assert.hasText(querySql);
DataSource ds = jdbcTemplate.getDataSource();
Connection con = null;
PreparedStatement ps = null;
try {
con = DataSourceUtils.getConnection(ds);
ps = con.prepareStatement(querySql);
ResultSetMetaData md = ps.getMetaData(); //<-- the query is compiled, but not executed
return processMetadata(md);
} finally {
JdbcUtils.closeStatement(ps);
DataSourceUtils.releaseConnection(con, ds);
}
}
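For completeness, processMetadata is not shown above; a minimal sketch of what such a helper might do (SomeResult's constructor here is a hypothetical placeholder) could be:

// Hypothetical helper: collects column names and JDBC type names from the statement metadata
private SomeResult processMetadata(ResultSetMetaData md) throws SQLException {
    Map<String, String> columnTypes = new LinkedHashMap<>();
    for (int i = 1; i <= md.getColumnCount(); i++) {
        columnTypes.put(md.getColumnName(i), md.getColumnTypeName(i));
    }
    return new SomeResult(columnTypes); // adapt to whatever SomeResult actually holds
}

It slots into the same class as getMetadata and needs the java.util.Map and java.util.LinkedHashMap imports.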
I have built a common app that works with PostgreSQL and should work on Oracle.
However, I'm getting strange errors when inserting records through a parameterized query.
My formatted query looks like this:
"INSERT INTO layer_mapping VALUES (#lm_id,#lm_layer_name,#lm_layer_file);"
Unlike Npgsql, which documents how to use parameters, I could not find how Oracle "prefers" them to be used. I could only find :1, :2, :3, for example.
I do not want to use positional parameters; I want to use named ones.
Is there a way to do it? Am I doing something wrong?
Thanks
You can use named parameters with ODP.NET like so:
using (var cx = new OracleConnection(connString)) {
    using (var cmd = cx.CreateCommand()) {
        cmd.CommandText = "Select * from foo_table where bar=:bar";
        cmd.BindByName = true; // bind by name instead of by position
        cmd.Parameters.Add("bar", barValue);
        // ...
    }
}
I made this lib https://github.com/pedro-muniz/ODPNetConnect/blob/master/ODPNetConnect.cs
so you can do parameterized write and read like this:
ODPNetConnect odp = new ODPNetConnect();
if (!String.IsNullOrWhiteSpace(odp.ERROR))
{
    throw new Exception(odp.ERROR);
}

//Write:
string sql = @"INSERT INTO TABLE (D1, D2, D3) VALUES (:D1, :D2, :D3)";
Dictionary<string, object> parameters = new Dictionary<string, object>();
parameters["D1"] = "D1";
parameters["D2"] = "D2";
parameters["D3"] = "D3";
int affectedRows = odp.ParameterizedWrite(sql, parameters);
if (!String.IsNullOrWhiteSpace(odp.ERROR))
{
    throw new Exception(odp.ERROR);
}

//Read:
sql = @"SELECT * FROM TABLE WHERE D1 = :D1";
parameters = new Dictionary<string, object>();
parameters["D1"] = "D1";
DataTable dt = odp.ParameterizedRead(sql, parameters);
if (!String.IsNullOrWhiteSpace(odp.ERROR))
{
    throw new Exception(odp.ERROR);
}
Note: you have to change these lines in ODPNetConnect.cs to set the connection strings:
static private string devConnectionString = "SET YOUR DEV CONNECTION STRING";
static private string productionConnectionString = "SET YOUR PRODUCTION CONNECTION STRING";
And you need to change line 123 to set the environment to dev or prod:
public OracleConnection GetConnection(string env = "dev", bool cacheOn = false)