Connection fail with PHP 7.3 and 8.0 - tried WAMP and XAMPP - attempting to access a SQL Server database

Here is the entire error:
Connection fail
Array ( [0] => Array ( [0] => IM006 [SQLSTATE] => IM006 [1] => 0 [code] => 0 [2] => [Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed [message] => [Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed ) [1] => Array ( [0] => 01000 [SQLSTATE] => 01000 [1] => 5701 [code] => 5701 [2] => [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Changed database context to 'sodb'. [message] => [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Changed database context to 'sodb'. ) [2] => Array ( [0] => 01000 [SQLSTATE] => 01000 [1] => 5703 [code] => 5703 [2] => [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Changed language setting to us_english. [message] => [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Changed language setting to us_english. ) )
I have successfully installed the following extensions, and they show up under phpinfo():
extension=php_sqlsrv_73_ts_x64.dll
extension=php_sqlsrv_73_nts_x64.dll
extension=php_pdo_sqlsrv_73_nts_x64.dll
extension=php_pdo_sqlsrv_73_ts_x64.dll
extension=php_sqlsrv_73_ts_x86.dll
extension=php_sqlsrv_73_nts_x86.dll
extension=php_pdo_sqlsrv_73_nts_x86.dll
extension=php_pdo_sqlsrv_73_ts_x86.dll
extension=php_sqlsrv_80_ts_x64.dll
extension=php_sqlsrv_80_nts_x64.dll
extension=php_pdo_sqlsrv_80_nts_x64.dll
extension=php_pdo_sqlsrv_80_ts_x64.dll
extension=php_sqlsrv_80_ts_x86.dll
extension=php_sqlsrv_80_nts_x86.dll
extension=php_pdo_sqlsrv_80_nts_x86.dll
extension=php_pdo_sqlsrv_80_ts_x86.dll
Here is my PHP code that throws this error:
$serverName = "***********************";
$connectionInfo = array("Database" => "*********", "UID" => "*********", "PWD" => "*************");

$maxret = 3;
$conn = false;
do {
    // attempt the connection; retry a few times with a short pause on failure
    $conn = sqlsrv_connect($serverName, $connectionInfo);
    if ($conn !== false) break;
    sleep(2);
} while ($maxret-- >= 0);
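The "Connection fail" text and the array dump at the top come from the error-reporting branch after this loop; a minimal sketch of what that presumably looks like (the exact reporting code is an assumption based on the output shown):
if ($conn === false) {
    echo "Connection fail";
    // sqlsrv_errors() returns the SQLSTATE / code / message entries printed above
    print_r(sqlsrv_errors(SQLSRV_ERR_ALL));
    die();
}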
I have installed the ODBC-related libraries.
I'm working on three different machines with different IP addresses, all allowed through the Azure firewall (the SQL database is on Azure). On one of the machines it works; the other two generate the error above.
Two of the machines have identical php.ini files, and in WAMP the listed extensions are the same. On one machine the PHP script runs; on the other it generates the error.

The solution to this problem was to install Microsoft SQL Server Management ...
SSMS

Related

Dblink from Oracle to Dameng fails with error: [unixODBC][Driver Manager]Can't open lib '/opt/dmdbms/bin/libdodbc.so' : file not found {01000}

I tried to use a dblink I created in Oracle to access a Dameng database and this error popped up. I had tried isql against the Dameng database before, and I could connect to Dameng with isql.
Here is my /etc/odbc.ini:
[dm8]
Driver = DM8 ODBC DRIVER
Description = DM ODBC DSND
SERVER = 10.10.10.73
UID = SYSDBA
PWD = SYSDBA
TCP_PORT = 5236
Here is my /etc/odbcinst.ini file:
[DM8 ODBC DRIVER]
Description = ODBC DRIVER FOR DM8
Driver = /opt/dmdbms/bin/libdodbc.so
threading = 0
I was able to connect to the Dameng database via isql successfully.
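For reference, the successful isql check looked roughly like this (DSN name taken from the odbc.ini above; the exact command line is my reconstruction, not from the original post):
isql -v dm8 SYSDBA SYSDBA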
But after I set up the related Oracle configurations and connected from Oracle, the error popped up.
Here are some related configurations:
ORACLE_HOME/hs/admin/initdm8.ora:
HS_FDS_CONNECT_INFO = dm8
HS_FDS_TRACE_LEVEL = debug
HS_FDS_SHAREABLE_NAME = /usr/lib64/libodbc.so
#HS_FDS_SHAREABLE_NAME = /usr/lib/libodbc.so
HS_FDS_SUPPORT_STATISTICS=FALSE
#HS_LANGUAGE="SIMPLIFIED CHINESE_CHINA.ZHS16GBK"
HS_LANGUAGE="AMERICAN_AMERICA.ZHS16GBK"
HS_NLS_NCHAR=UCS2
For ORACLE_HOME/network/admin/listener.ora, I added a SID_DESC entry to the existing SID_LIST:
(SID_DESC =
(PROGRAM = dg4odbc)
(ORACLE_HOME = /u01/app/oracle/product/19.0.3/dbhome_1)
(SID_NAME = dm8)
(ENVS="LD_LIBRARY_PATH=/u01/app/oracle/product/19.0.3/dbhome_1/lib:/opt/dmdbms/bin")
)
For ORACLE_HOME/network/admin/tnsnames.ora, I added:
dm8 =
(DESCRIPTION=
(ADDRESS=
(PROTOCOL=TCP) (HOST=ylzcs) (PORT=1521)
)
(CONNECT_DATA=
(SID=dm8)
)
(HS=OK)
)
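As a quick sanity check of the listener/tnsnames plumbing, independent of the dblink itself, something like the following can be used (my own suggestion; tnsping only verifies that the alias resolves and the listener answers, not that DG4ODBC can load the driver):
tnsping dm8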
The dblink was created and tested like this:
drop database link dblink_DM8;
create database link dblink_DM8 connect to "SYSDBA" identified by "SYSDBA" using 'dm8';
select * from v$version@dblink_DM8;
I have searched lots of web pages and encountered many assumptions, but they usually do not match my situation:
Assumption: the libdodbc.so file doesn't exist. Answer: the file does exist at that location.
Assumption: Oracle's install user doesn't have access to this file. Answer: the oracle user was used to install both the Dameng database and Oracle, so libdodbc.so belongs to oracle.
Assumption: you didn't set up the LD_LIBRARY_PATH environment variable. Answer: I did set it up in .bash_profile, and the content of LD_LIBRARY_PATH is :/opt/dmdbms/bin:/opt/dmdbms/bin
Assumption: libdodbc.so has some dependent files missing. Answer: there are no "not found" entries:
[root@ylzcs log]# ldd /opt/dmdbms/bin/libdodbc.so
linux-vdso.so.1 => (0x00007ffd317fd000)
libdmdpi.so => /opt/dmdbms/bin/libdmdpi.so (0x00007f5852901000)
libdmfldr.so => /opt/dmdbms/bin/libdmfldr.so (0x00007f5851ca1000)
libdmelog.so => /opt/dmdbms/bin/libdmelog.so (0x00007f5851a99000)
libdmutl.so => /opt/dmdbms/bin/libdmutl.so (0x00007f5851884000)
libdmclientlex.so => /opt/dmdbms/bin/libdmclientlex.so (0x00007f585164f000)
libdmos.so => /opt/dmdbms/bin/libdmos.so (0x00007f5851420000)
libdmcvt.so => /opt/dmdbms/bin/libdmcvt.so (0x00007f5850d3f000)
libdmstrt.so => /opt/dmdbms/bin/libdmstrt.so (0x00007f5850b29000)
librt.so.1 => /lib64/librt.so.1 (0x00007f5850915000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f58506f9000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f58504f4000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f58501ed000)
libm.so.6 => /lib64/libm.so.6 (0x00007f584feeb000)
libc.so.6 => /lib64/libc.so.6 (0x00007f584fb1d000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f584f907000)
libdmmem.so => /opt/dmdbms/bin/libdmmem.so (0x00007f584f6f5000)
libdmcalc.so => /opt/dmdbms/bin/libdmcalc.so (0x00007f584f474000)
/lib64/ld-linux-x86-64.so.2 (0x0000562c35c01000)
Assumption: libdodbc.so is not compatible with the system. Answer: both the driver and the system are x86_64.
Could there be any other reasons? Any help would be greatly appreciated!

Oracle APEX_WEB_SERVICE.MAKE_REST_REQUEST raises ORA-29273 and ORA-24247

I'm working on Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production and I need to develop a stored procedure that accesses an API. I have to retrieve the endpoint
https://api.my.host:8443/rest/ec/617643
I have set up the Oracle wallet and added the certificate like this:
orapki wallet create -wallet /home/oracle/walletapi -pwd walletapi2022 -auto_login
orapki wallet add -wallet /home/oracle/walletapi -trusted_cert -cert /tmp/api.my.host.cer -pwd walletapi2022
I have set the ACEs:
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
  host        => 'api.my.host'
  ,lower_port => 8443
  ,upper_port => 8443
  ,ace        => XS$ACE_TYPE(
      privilege_list  => XS$NAME_LIST('http')
      ,principal_name => 'MYUSER'
      ,principal_type => XS_ACL.ptype_db
  )
);
DBMS_NETWORK_ACL_ADMIN.APPEND_WALLET_ACE(
  wallet_path => 'file:/home/oracle/walletapi'
  ,ace        => XS$ACE_TYPE(
      privilege_list  => XS$NAME_LIST('use_client_certificates', 'use_passwords')
      ,principal_name => 'MYUSER'
      ,principal_type => XS_ACL.ptype_db
));
Documentation
In my stored procedure I try this:
...
l_clob := APEX_WEB_SERVICE.make_rest_request(
  p_url          => 'https://api.my.host:8443/rest/ec/617643'
  ,p_http_method => 'GET'
  ,p_wallet_path => 'file:/home/oracle/walletapi'
  ,p_wallet_pwd  => 'walletapi2022'
);
...
Documentation
and this error is raised:
ORA-29273: HTTP request failed
ORA-06512: at "APEX_210200.WWV_FLOW_WEB_SERVICES", line 1182
ORA-06512: at "APEX_210200.WWV_FLOW_WEB_SERVICES", line 782
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at "SYS.UTL_HTTP", line 380
ORA-06512: at "SYS.UTL_HTTP", line 1127
ORA-06512: at "APEX_210200.WWV_FLOW_WEB_SERVICES", line 756
ORA-06512: at "APEX_210200.WWV_FLOW_WEB_SERVICES", line 1023
ORA-06512: at "APEX_210200.WWV_FLOW_WEB_SERVICES", line 1371
ORA-06512: at "APEX_210200.WWV_FLOW_WEBSERVICES_API", line 626
ORA-06512: at line 6
Applying the solution with CREATE_ACL and ASSIGN_ACL only changes the value of the ACL column in the DBA_NETWORK_ACLS and DBA_NETWORK_ACL_PRIVILEGES views, so I removed the ACLs and privileges and restarted.
Reviewing this question, I noticed that this error is raised for "APEX_210222", which is one of the schemas created during the APEX installation.
I tried:
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
  host        => 'api.my.host'
  ,lower_port => 8443
  ,upper_port => 8443
  ,ace        => XS$ACE_TYPE(
      privilege_list  => XS$NAME_LIST('http')
      ,principal_name => 'APEX_210222'
      ,principal_type => XS_ACL.ptype_db
  )
);
and the web service request now works correctly.
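To double-check which principals actually hold the http privilege for that host, a query along these lines can help (my own verification query, not part of the original fix):
SELECT host, lower_port, upper_port, principal, privilege
  FROM dba_host_aces
 WHERE host = 'api.my.host';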

[IBM][CLI Driver] CLI0125E Function sequence error in Ruby

require "ibm_db"
=> true
db_config = {:host=>"ec2-<>.compute.amazonaws.com", :database=>"SAMPLE", :user=>"user", :password=>"pass", :port=>50000}
db_conn = IBM_DB.connect("DATABASE=#{db_config[:database]};HOSTNAME=#{db_config[:host]};PORT=#{db_config[:port]};PROTOCOL=TCPIP;UID=#{db_config[:user]};PWD=#{db_config[:password]};AUTHENTICATION=SERVER;ClientWrkStnName=tester", "", "")
=> #<IBM_DB::Connection:0x00007fa563fbc8f8>
IBM_DB.autocommit(db_conn)
=> 1
IBM_DB.autocommit(db_conn,0)
=> true
IBM_DB.autocommit(db_conn)
=> 0
sql = "INSERT INTO TTE (name, price) VALUES (?,?)"
stmt = IBM_DB.prepare(db_conn, sql)
#<IBM_DB::Statement:0x00007fa564ce28c0>
value = "string"
IBM_DB.bind_param(stmt,1,value)
(pry):12: warning: Describe Param Failed: [IBM][CLI Driver] CLI0125E Function sequence error. SQLSTATE=HY010 SQLCODE=-99999
=> false
I tried another way:
param = ["sr", 1]
=> ["sr", 1]
IBM_DB.execute(stmt, param)
(pry):14: warning: Execute Failed due to: [IBM][CLI Driver] CLI0125E Function sequence error. SQLSTATE=HY010 SQLCODE=-99999
=> false
I'm getting the CLI0125E function sequence error both ways and am not sure how to resolve it.
I'm on macOS Catalina, using ibm_db (3.0.5).
.zshrc:
export IBM_DB_HOME=/Applications/dsdriver
export DYLD_LIBRARY_PATH=/Applications/dsdriver/lib
export LD_LIBRARY_PATH=/Applications/dsdriver/lib
There was a mismatch in the table schema: the field price was not present. After correcting the table schema, the query works.
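For completeness, a minimal sketch of the working flow after the fix (table and column names as above; the explicit commit is an assumption, needed only because autocommit was disabled earlier in the session):
sql  = "INSERT INTO TTE (name, price) VALUES (?, ?)"
stmt = IBM_DB.prepare(db_conn, sql)
# pass both parameter values so they match the two placeholders and the corrected table schema
IBM_DB.execute(stmt, ["string", 10])   # => true on success
IBM_DB.commit(db_conn)                 # persist the row, since autocommit is off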

Logstash - Error: "An unknown error occurred sending a bulk request to Elasticsearch"

I am trying to move SQL Server table records to Elasticsearch via Logstash; it's basically a synchronization. But I am getting an "unknown error" from Logstash. I have provided my configuration file as well as the error log.
Configuration:
input {
  jdbc {
    #https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html#plugins-inputs-jdbc-record_last_run
    jdbc_connection_string => "jdbc:sqlserver://localhost-serverdb;database=Application;user=dev;password=system23$"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_user => nil
    # The path to our downloaded jdbc driver
    jdbc_driver_library => "C:\Program Files (x86)\sqljdbc6.2\enu\sqljdbc4-3.0.jar"
    # The name of the driver class for SQL Server
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    # executes every minute
    schedule => "* * * * *"
    # executes at the 0th minute of every hour
    #schedule => "0 * * * *"
    last_run_metadata_path => "C:\Software\ElasticSearch\logstash-6.4.0\.logstash_jdbc_last_run"
    #record_last_run => false
    #clean_run => true
    # Query for testing purposes
    statement => "Select * from tbl_UserDetails"
  }
}
output {
  elasticsearch {
    hosts => ["10.187.144.113:9200"]
    index => "tbl_UserDetails"
    # document_id is a unique id; it has to be provided during sync, else we may get duplicate entries in the Elasticsearch index
    document_id => "%{Login_User_Id}"
  }
}
Error Log:
[2018-09-18T21:04:32,171][ERROR][logstash.outputs.elasticsearch]
An unknown error occurred sending a bulk request to Elasticsearch. We will retry indefinitely {
:error_message=>"\"\\xF0\" from ASCII-8BIT to UTF-8",
:error_class=>"LogStash::Json::GeneratorError",
:backtrace=>[
"C:/Software/ElasticSearch/logstash-6.4.0/logstash-core/lib/logstash/json.rb:27:in `jruby_dump'",
"C:/Software/ElasticSearch/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in `block in bulk'",
"org/jruby/RubyArray.java:2486:in `map'",
"C:/Software/ElasticSearch/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in `block in bulk'",
"org/jruby/RubyArray.java:1734:in `each'",
"C:/Software/ElasticSearch/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:117:in `bulk'",
"C:/Software/ElasticSearch/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/common.rb:275:in `safe_bulk'",
"C:/Software/ElasticSearch/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/common.rb:180:in `submit'",
"C:/Software/ElasticSearch/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/common.rb:148:in `retrying_submit'",
"C:/Software/ElasticSearch/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/common.rb:38:in `multi_receive'",
"org/logstash/config/ir/compiler/OutputStrategyExt.java:114:in `multi_receive'",
"org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:97:in `multi_receive'",
"C:/Software/ElasticSearch/logstash-6.4.0/logstash-core/lib/logstash/pipeline.rb:372:in `block in output_batch'",
"org/jruby/RubyHash.java:1343:in `each'",
"C:/Software/ElasticSearch/logstash-6.4.0/logstash-core/lib/logstash/pipeline.rb:371:in `output_batch'",
"C:/Software/ElasticSearch/logstash-6.4.0/logstash-core/lib/logstash/pipeline.rb:323:in `worker_loop'",
"C:/Software/ElasticSearch/logstash-6.4.0/logstash-core/lib/logstash/pipeline.rb:285:in `block in start_workers'"]}
[2018-09-18T21:05:00,140][INFO ][logstash.inputs.jdbc ] (0.008273s) Select * from tbl_UserDetails
Logstash version: 6.4.0
Elasticsearch version: 6.3.1
Thanks in advance.
You have a character '\xF0' in the database which is causing this issue. This '\xF0' character is probably the first byte of a multibyte character, but since Ruby here is decoding with ASCII-8BIT, it treats each byte as a separate character.
You may try using columns_charset to set the proper charset: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html#plugins-inputs-jdbc-columns_charset
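A hedged sketch of what that could look like (the column name is a placeholder; point it at whichever text column carries the non-ASCII data):
input {
  jdbc {
    # read this column as UTF-8 instead of the default ASCII-8BIT
    columns_charset => { "user_name" => "UTF-8" }
  }
}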
The above issue is resolved.
Thanks for your support, guys.
The change I made was in input -> jdbc, where I added the two properties below:
input {
  jdbc {
    tracking_column => "login_user_id"
    use_column_value => true
  }
}
and under output -> elasticsearch I changed two properties:
output {
  elasticsearch {
    document_id => "%{login_user_id}"
    document_type => "user_details"
  }
}
The main takeaway here is that all the values should be given in lowercase.

Logstash: Error org.postgres.Driver not loaded

I need to get data from a PostgreSQL DB and index it into Elasticsearch.
https://www.elastic.co/blog/logstash-jdbc-input-plugin
When I run /opt/logstash-2.3.3/bin/logstash -v -f es_table.logstash.conf
I receive the following error:
Pipeline aborted due to error
{:exception=>#<LogStash::ConfigurationError: org.postgres.Driver not loaded.
Are you sure you've included the correct jdbc driver in :jdbc_driver_library?>, :backtrace=>["/opt/logstash-2.3.3/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-3.0.2/lib/logstash/plugin_mixins/jdbc.rb:156:in `prepare_jdbc_connection'", "/opt/logstash-2.3.3/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-3.0.2/lib/logstash/plugin_mixins/jdbc.rb:148:in `prepare_jdbc_connection'", "/opt/logstash-2.3.3/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-3.0.2/lib/logstash/inputs/jdbc.rb:167:in `register'", "/opt/logstash-2.3.3/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:330:in `start_inputs'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash-2.3.3/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:329:in `start_inputs'", "/opt/logstash-2.3.3/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:180:in `start_workers'", "/opt/logstash-2.3.3/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash-2.3.3/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/agent.rb:473:in `start_pipeline'"], :level=>:error}
Here is a piece of my Logstash configuration:
input {
  jdbc {
    jdbc_user => 'user'
    jdbc_driver_class => 'org.postgresql.Driver'
    jdbc_connection_string => 'jdbc:postgresql://1.1.1.1:5432/db'
    lowercase_column_names => false
    clean_run => false
    jdbc_driver_library => '/usr/share/java/postgresql-jdbc4.jar'
    jdbc_password => 'pass'
    jdbc_validate_connection => true
    jdbc_page_size => 1000
    jdbc_paging_enabled => true
    statement => 'SELECT * FROM "table"'
    type => 'table'
  }
...
The jdbc4 driver exists. I tried jdbc3 too without success.
ls /usr/share/java | grep postgresql-jdbc
postgresql-jdbc3-9.2.jar
postgresql-jdbc3.jar
postgresql-jdbc4-9.2.jar
postgresql-jdbc4.jar
The Driver class is inside:
jar tf /usr/share/java/postgresql-jdbc4.jar | grep -i driver
org/postgresql/Driver$1.class
org/postgresql/Driver$ConnectThread.class
org/postgresql/Driver.class
org/postgresql/util/PSQLDriverVersion.class
META-INF/services/java.sql.Driver
The port 5432 is open:
telnet 192.168.109.108 5432
Trying 192.168.109.108...
Connected to 192.168.109.108.
Escape character is '^]'.
Authentication to the DB works.
The problem was that I made a mistake in the driver class name.
I wrote jdbc_driver_class => 'org.postgres.Driver'
but the correct name is jdbc_driver_class => 'org.postgresql.Driver'.
I resolved this issue by following the workaround suggested in this issue.
Reason:
This is a known problem with the module changes in JDK 9 (Jigsaw). The classloaders have seen some changes, and a workaround added earlier for driver loading is now failing. The jdbc input has the same failure on JDK 11 (9+). We are working on a fix.
Workaround that worked for me:
An "extreme" workaround is to copy the driver jar to the logstash-core/lib/jars/ directory. These jars get added to the correct JDK classpath as Logstash is started via Java.
