Unable to use RODM to connect to an Oracle database from R

I am trying to connect to an Oracle database from R.
I used RODM_open_dbms_connection(dsn, uid = "", pwd = ""), but it doesn't work, and I am not sure what kind of error it is.
Here is the error screen from R.
> library(RODM)
Loading required package: RODBC
> DB <- RODM_open_dbms_connection(dsn = "****", uid = "****", pwd = "****")
Error in typesR2DBMS[[driver]] <<- value[c("double", "integer", "character",  :
  cannot change value of locked binding for 'typesR2DBMS'

Have you tried ROracle? After you get the Oracle Instant Client installed on your machine, connecting and fetching records from R looks like this:
library(ROracle)
library(data.table)  # for data.table()

# username/password/dbname are placeholders for your own credentials
con <- dbConnect(dbDriver("Oracle"), username = "username", password = "password", dbname = "dbname")
res <- dbSendQuery(con, "select * from schema.table")
dt  <- data.table(fetch(res, n = -1))  # n = -1 fetches all remaining rows

I looked into RODM_open_dbms_connection and commented out the call to setSqlTypeInfo(). After that I no longer received the error.
Install the RODM package from source; only then can you edit the package code.
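A sketch of that workaround (the tarball version number and the file to edit are assumptions, so adjust to what the source package actually contains):

```r
# Hypothetical sketch: install RODM from a locally edited source tree.
# 1. Download and unpack the source package
download.packages("RODM", destdir = ".", type = "source")
untar("RODM_1.1.tar.gz")  # the version number here is a guess
# 2. Edit RODM/R/RODM_open_dbms_connection.R by hand to comment out the
#    setSqlTypeInfo() call, then reinstall from the edited directory
install.packages("RODM", repos = NULL, type = "source")
```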

Related

I can't run a correlation with ggcorrmat

I am getting this error when running a correlation matrix with the R package ggstatsplot:
ggcorrmat(data = d, type = "nonparametric", p.adjust.method = "hochberg", pch = "")
Error: 'data_to_numeric' is not an exported object from 'namespace:datawizard'
Could somebody help me?
I expected to have the script to run normally as I have used it before (around July) without any errors.
Is there anything that has changed about the ggstatsplot package?
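An error of the form "'x' is not an exported object from 'namespace:y'" usually means the installed ggstatsplot expects a different version of its dependency datawizard than the one on the machine. A hedged first step (not a confirmed fix) is to update both packages so their versions match again:

```r
# Hypothetical fix: update ggstatsplot together with its dependency
# datawizard so the two versions are compatible again
install.packages(c("datawizard", "ggstatsplot"))
# restart R, then retry the ggcorrmat() call from above
```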

Very slow connection to Snowflake from Databricks

I am trying to connect to Snowflake using R in Databricks. The connection works and I can run queries and retrieve data successfully; my problem is that it can take more than 25 minutes simply to connect, although once connected all my queries are quick.
I am using the sparklyr function 'spark_read_source', which looks like this:
query <- spark_read_source(
  sc        = sc,
  name      = "query_tbl",
  memory    = FALSE,
  overwrite = TRUE,
  source    = "snowflake",
  options   = append(sf_options, client_Q)
)
where 'sf_options' is a list of connection parameters that looks similar to this:
sf_options <- list(
  sfUrl       = "https://<my_account>.snowflakecomputing.com",
  sfUser      = "<my_user>",
  sfPassword  = "<my_pass>",
  sfDatabase  = "<my_database>",
  sfSchema    = "<my_schema>",
  sfWarehouse = "<my_warehouse>",
  sfRole      = "<my_role>"
)
and my query is a string appended to the 'options' argument, e.g.
client_Q <- 'SELECT * FROM <my_database>.<my_schema>.<my_table>'
I can't understand why it is taking so long; if I run the same query from RStudio using a local Spark instance and 'dbGetQuery', it is instant.
Is spark_read_source the problem? Is it an issue between Snowflake and Databricks? Or something else? Any help would be great. Thanks.
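One way to narrow this down (a diagnostic sketch reusing the objects from the question, not a fix) is to time the spark_read_source() step on its own and compare it with the later queries:

```r
# Hypothetical diagnostic: time the spark_read_source() call by itself
# to confirm this step, not the query, accounts for the delay
t0 <- Sys.time()
query <- spark_read_source(
  sc        = sc,
  name      = "query_tbl",
  memory    = FALSE,
  overwrite = TRUE,
  source    = "snowflake",
  options   = append(sf_options, client_Q)
)
print(Sys.time() - t0)  # elapsed time for the connect/read step
```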

RStudio fails to connect to SSL-enabled PostgreSQL server on Windows machine

I am facing a problem connecting to an SSL-enabled PostgreSQL server from Windows. I am getting the following error:
Error :
Error in postgresqlNewConnection(drv, ...) :
  RS-DBI driver: (could not connect ip:80 on dbname "all": sslmode value "require" invalid when SSL support is not compiled in)
Commands I have used:
install.packages("RPostgreSQL")
install.packages("rstudioapi")
require("RPostgreSQL")
require("rstudioapi")
drv <- dbDriver("PostgreSQL")
pg_dsn <- paste0('dbname=', "all", ' ', 'sslmode=require')
con <- dbConnect(
  drv,
  dbname   = pg_dsn,
  host     = "ip",
  port     = 80,
  user     = "abcd",
  password = rstudioapi::askForPassword("Database password")
)
You need to use a PostgreSQL client shared library (libpq.dll) that was built with SSL support.
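If rebuilding libpq.dll is not an option, one workaround (an assumption on my part, not part of the answer above) is the RPostgres package, whose Windows binaries ship with an SSL-enabled libpq; sslmode can then be passed directly to dbConnect():

```r
# Hypothetical alternative using RPostgres instead of RPostgreSQL;
# host/port/user are the same placeholders as in the question
library(DBI)
con <- dbConnect(
  RPostgres::Postgres(),
  dbname   = "all",
  host     = "ip",
  port     = 80,
  user     = "abcd",
  password = rstudioapi::askForPassword("Database password"),
  sslmode  = "require"
)
```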

sqlcmd runs fine at the command line but will not produce output as a CmdExec step in a SQL Agent job

I am pulling my hair out and have tried every posted suggestion.
I have a T-SQL script called C:\DBAReports\testsql.sql. If I go to a command prompt on my server and run: sqlcmd -S localhost -i C:\DBAReports\testsql.sql -o C:\DBAReports\testout.txt, it runs fine and writes the output file.
But if I create a new Agent job with one step of type Operating system (CmdExec), run as the SQL Server Agent service account, with "On success: quit the job reporting success" and "On failure: quit the job reporting failure", and with the job owner set to the same admin Windows login I use at the command prompt, then right-clicking the job and starting at step 1 reports that the job succeeded (job was invoked by my Windows login). Step 1 shows: "Executed as user: is-sql. The step did not generate any output. Process Exit Code 0. The step was successful."
But it doesn't write the output file.
Any ideas?
The reason I want to do this is that I am getting periodic "Error: 18057, Severity: 20, State: 2. Failed to set up execution context" messages in my SQL Server log. What I hope to do is kick off this job when that occurs, to find out the SPIDs, status, running SQL, etc., and write them to an output file.
My testsql.sql script contains:
SELECT
SPID = er.session_id
,STATUS = ses.STATUS
,[Login] = ses.login_name
,Host = ses.host_name
,BlkBy = er.blocking_session_id
,DBName = DB_Name(er.database_id)
,CommandType = er.command
,SQLStatement = st.text
,ObjectName = OBJECT_NAME(st.objectid)
,ElapsedMS = er.total_elapsed_time
,CPUTime = er.cpu_time
,IOReads = er.logical_reads + er.reads
,IOWrites = er.writes
,LastWaitType = er.last_wait_type
,StartTime = er.start_time
,Protocol = con.net_transport
,ConnectionWrites = con.num_writes
,ConnectionReads = con.num_reads
,ClientAddress = con.client_net_address
,Authentication = con.auth_scheme
FROM sys.dm_exec_requests er
OUTER APPLY sys.dm_exec_sql_text(er.sql_handle) st
LEFT JOIN sys.dm_exec_sessions ses
ON ses.session_id = er.session_id
LEFT JOIN sys.dm_exec_connections con
ON con.session_id = ses.session_id
Thanks in advance for any help. I have tried so many suggestions; either I get syntax errors on the command, or, when the sqlcmd has no syntax error, it simply generates no output.
An alternate way: change the job step to Type: T-SQL script, with the command:
EXEC master..xp_CMDShell 'C:\Work\temp\test3.bat'
Please replace the bat file path with yours.

Connecting to Oracle Database in Haskell using HDBC

I was trying to work with an Oracle database from Haskell and ran into the following problem.
Here is the code:
module Main where

import Database.HDBC
import Database.HDBC.ODBC

main :: IO ()
main = do
  let connectionString = "Driver={Microsoft ODBC for Oracle};Server=127.0.0.1;Uid=valera;Pwd=2562525;"
  conn <- connectODBC connectionString
  vals <- quickQuery conn "SELECT * FROM PERSONS_TEST" []
  print vals
Pretty simple, huh? But that won't work. With this connection string the error is
*** Exception: SqlError {seState = "[\"HY090\"]", seNativeError = -1, seErrorMsg = "sqlGetInfo SQL_TXN_CAPABLE: [\"0: [Microsoft][ODBC driver for Oracle]\\65533...
and then \65533 repeats many times. And with this connection string
Provider=msdaora;Data Source=127.0.0.1;User Id=valera;Password=2562525;
the error is
*** Exception: SqlError {seState = "[\"IM002\"]", seNativeError = -1, seErrorMsg = "connectODBC/sqlDriverConnect: [\"0: [Microsoft][\\65533...
and \65533 repeats again to the end.
I suppose the problem is in the connection string, but I have tried a whole bunch of them (I used http://www.connectionstrings.com/).
I'm using Haskell Platform 2011.4.0.0, GHC 7.0.4, Oracle Database XE 11.2 on Windows 7 64-bit. Microsoft MDAC SDK installed.
\65533 and so on are the characters of the ODBC driver's error message in your locale (Russian?), which could not be decoded. I find it best to develop on a system with an English locale; that way error messages shown in the ghci console are in English and can actually be read.