Everything is fine with Sphinx; I only have a problem with Turkish characters.
I am using sphinx-2.2.11 (2.3 also does not work).
Oracle 11g connection in sphinx.conf:
source db
{
type = odbc
odbc_dsn = DSN=dsn_ABC;Driver={Oracle in OraClient11g_home2};Dbq=ABC-DATABASE-XX:1521/ibbcbs;Uid=ibbcbs;Pwd=ABC
sql_host = XX
sql_user = XX
sql_pass = XX
sql_db = XX
sql_port = 1521
}
The query looks like:
select
1000000+objectid as GID,
TO_CHAR(NAME) as NAME,
SDO_UTIL.TO_WKTGEOMETRY(SHAPE) as SHAPE_WKT
from MAHALLE
I tried many different charset tables for Turkish in sphinx.conf:
charset_table = A->a, B->b, C->c, U+C7->U+E7, D..G->d..g, U+011E->U+011F, H->h, U+49->U+131, U+130->i, J..O->j..o, U+D6->U+F6, P->p, R..U->r..u, U+15E->U+15F, U+DC->U+FC, X->x, W->w, V->v, Y->y, Z->z, a, b, c, U+E7, d..g, U+11F, h, U+131, i..o, U+F6, p, r..u, U+15F, U+FC, x, w, v, y, z
Original Data: ALANİÇİ
But indexed in Sphinx: ALANIÇI
İ is converted to I somehow. Even if I search for the same text (ALANIÇI), Sphinx does not return any results.
The issue is that the Oracle client is not connecting to the database as UTF-8, so Sphinx is not receiving the data correctly.
To fix this, set the language for the Oracle client to TURKISH_TURKEY.UTF8.
On Windows, this can be done by editing the NLS_LANG registry value under Computer\HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\KEY_OraClient11g_home1. The path may differ depending on the Oracle client you are using.
The same can be achieved by setting an environment variable, as described under the heading "For Windows:" at https://docs.oracle.com/cd/E12102_01/books/AnyInstAdm784/AnyInstAdmPreInstall18.html.
On Unix it can be fixed with:
setenv NLS_LANG TURKISH_TURKEY.UTF8
After the above changes, reindex the data and all should be good.
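For example, in a Bourne-compatible shell the equivalent setting plus a reindex would look like this (a minimal sketch; whether you pass --all or name specific indexes depends on your setup):
export NLS_LANG=TURKISH_TURKEY.UTF8
indexer --all --rotate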
100 becomes 100.0 when I connect to MySQL 8 via JDBC for float data, but 100 when I use Navicat or the mysql command line.
Change the type of the f variable from float to int and use:
f = rs.getInt("f");
It will work.
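For context, a minimal self-contained sketch of the fix (the connection URL, credentials, and the table name t are placeholders, not from the original post):
import java.sql.*;

public class ReadIntColumn {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/testdb";
        try (Connection c = DriverManager.getConnection(url, "user", "pass");
             Statement st = c.createStatement();
             ResultSet rs = st.executeQuery("SELECT f FROM t")) {
            while (rs.next()) {
                // getInt returns the value as an int, so it prints as 100;
                // getFloat would render the same stored value as 100.0.
                int f = rs.getInt("f");
                System.out.println(f);
            }
        }
    }
}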
I am trying to connect to Snowflake using R in Databricks. My connection works and I can run queries and retrieve data successfully; however, it can take more than 25 minutes simply to connect, though once connected all my queries are quick thereafter.
I am using the sparklyr function 'spark_read_source', which looks like this:
query <- spark_read_source(
sc = sc,
name = "query_tbl",
memory = FALSE,
overwrite = TRUE,
source = "snowflake",
options = append(sf_options, client_Q)
)
where 'sf_options' is a list of connection parameters that looks similar to this:
sf_options <- list(
sfUrl = "https://<my_account>.snowflakecomputing.com",
sfUser = "<my_user>",
sfPassword = "<my_pass>",
sfDatabase = "<my_database>",
sfSchema = "<my_schema>",
sfWarehouse = "<my_warehouse>",
sfRole = "<my_role>"
)
and my query is a string appended to the 'options' argument, e.g.
client_Q <- 'SELECT * FROM <my_database>.<my_schema>.<my_table>'
I can't understand why it is taking so long; if I run the same query from RStudio using a local Spark instance and 'dbGetQuery', it is instant.
Is spark_read_source the problem? Is it an issue between Snowflake and Databricks? Or something else? Any help would be great. Thanks.
I am using a SQL SELECT to access a DB2 table as schemaname.tablename, as follows:
select 'colname' from schemaname.tablename
Here 'colname' is SERVER_POOL_NAME, which the table definitely has, yet I get the following error:
"Invalid parameter: Unknown column name SERVER_POOL_NAME . ERRORCODE=-4460, SQLSTATE=null"
I am using the DB2 v10.1 FP0 JDBC driver, version 3.63.123 (JDBC 3.0 spec).
The application runs as the DB2 administrator and also as a Windows 2008 admin.
I saw a discussion about this issue at: db2jcc4.jar Invalid parameter: Unknown column name
But I do not know where the connection parameter useJDBC4ColumnNameAndLabelSemantics should be set (to the value 2).
I saw that the parameter should appear in com.ibm.db2.jcc.DB2BaseDataSource (see: http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=%2Fcom.ibm.db2.luw.apdv.java.doc%2Fsrc%2Ftpc%2Fimjcc_r0052607.html),
but I cannot find this file in my DB2 installation; maybe it is packed inside a .jar file.
Any advice?
There is a link on the page you're referring to, showing you the ways to set properties. Specifically, you can populate a Properties object with desired values and supply it to the getConnection() call:
String url = "jdbc:db2://host:50000/yourdb";
Properties props = new Properties();
props.setProperty("useJDBC4ColumnNameAndLabelSemantics", "2");
// set other required properties
Connection c = DriverManager.getConnection(url, props);
Alternatively, you can embed property name/value pairs in the JDBC URL itself:
String url = "jdbc:db2://host:50000/yourdb:useJDBC4ColumnNameAndLabelSemantics=2;";
// set other required properties
Connection c = DriverManager.getConnection(url);
Note that each name/value pair must be terminated by a semicolon, even the last one.
I have a query as:
SELECT ps_node_id, name
FROM cz_ps_nodes
WHERE cz_ps_nodes.ps_node_type = 261
START WITH name = 'Bundle Rule Repository' AND cz_ps_nodes.devl_project_id = P_devl_project_id AND cz_ps_nodes.deleted_flag = 0
CONNECT BY PRIOR ps_node_id = parent_id
This query works.
But if I just remove name from the SELECT list, like:
SELECT ps_node_id
FROM cz_ps_nodes
WHERE cz_ps_nodes.ps_node_type = 261
START WITH name = 'Bundle Rule Repository' AND cz_ps_nodes.devl_project_id = P_devl_project_id AND cz_ps_nodes.deleted_flag = 0
CONNECT BY PRIOR ps_node_id = parent_id
The query just hangs. It was working on Oracle 10g; the problem started when we upgraded to Oracle 11g.
Could anyone explain why?
Got the issue solved by using: alter session set optimizer_features_enable = '10.2.0.4'. This reverts the optimizer to its 10.2.0.4 behavior for the session, sidestepping the plan change introduced in 11g.
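For reference, the workaround run in the same session immediately before the reduced query (a sketch; P_devl_project_id is the parameter from the original post):
ALTER SESSION SET optimizer_features_enable = '10.2.0.4';

SELECT ps_node_id
FROM cz_ps_nodes
WHERE cz_ps_nodes.ps_node_type = 261
START WITH name = 'Bundle Rule Repository' AND cz_ps_nodes.devl_project_id = P_devl_project_id AND cz_ps_nodes.deleted_flag = 0
CONNECT BY PRIOR ps_node_id = parent_id;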
I am running into encoding problems:
db = SQLite3::Database.new "encoding.db"
=> #<SQLite3::Database:0x9b69cbc>
db.encoding
=> #<Encoding:UTF-16LE>
r1 = db.execute("select * from db_log_sink limit 2").first.last
=> "\u7953\u7473\u6D65\u5420\u6D69\u7A65\u6E6F\u3A65\u4520\u5453\u2F20\u4520\u5444\x0A"
r2 = db.execute("select * from db_log_sink limit 2").last.last
=> "????????\u0A3A"
r1.encoding
=> #<Encoding:UTF-16LE>
r2.encoding
=> #<Encoding:UTF-8>
Working at the Linux command line with the same file, I get:
select * from db_log_sink limit 2;
1|2011-11-16T12:02:15|0|System Timezone: EST / EDT
2|2011-11-16T12:02:15|0|Server Hostnames:
In the browser, r2 comes out as Traditional Chinese Han; r1 comes out as normally formatted text.
About half the records in text columns come out garbled when using the gem/Ruby. Everything looks normal at the Linux command line and in SQLiteSpy under Windows.
I have tried ~20 SQLite databases so far, and they all show the same behaviour.
I can provide a download link for a db file if needed.
Any help would be much appreciated.
Found a workaround:
r2.unpack('U*').pack('v*').force_encoding('utf-8')
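Why this appears to work (my reading of the transcript, not an authoritative diagnosis): the file's declared encoding is UTF-16LE while the stored text is evidently plain UTF-8/ASCII bytes, so the gem fuses pairs of original bytes into single 16-bit code units. unpack('U*') recovers those code units as integers, pack('v*') re-serializes each one as a little-endian byte pair, and force_encoding relabels the recovered bytes as UTF-8. A sketch wrapping this as a helper (the method name is mine):
# Undo the mis-decoding: assumes the column text really was UTF-8
# that got read through a UTF-16LE declaration.
def fix_misdecoded(str)
  str.unpack('U*')             # the fused 16-bit code units, as integers
     .pack('v*')               # each written back as a little-endian byte pair
     .force_encoding('utf-8')  # relabel the recovered original bytes
end

fix_misdecoded(r2)  # => something like "Server Hostnames:\n", matching the command-line output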