pgAdmin III: Slony-I creation scripts not available - Windows

I have installed PostgreSQL 9.4 (x86), pgAdmin 1.20, and Slony-I 2.2.3 through Stack Builder on a Windows platform.
When trying to create a new Slony cluster in pgAdmin, I get the
"Slony-I creation scripts not available" error message.
1) I have set the Slony-I path option to
C:\Program Files (x86)\PostgreSQL\9.4\share
where all the slony1_*.sql scripts are located, but I still get the same error.
2) Using procmon I see that pgAdmin tries to access "slony1_funcs.sql" and "slony1_funcs.2.2.0.sql", neither of which exists in my \share directory.
The installed funcs scripts are "slony1_funcs.2.2.3.sql", "slony1_funcs.v83.2.2.3.sql", and "slony1_funcs.v84.2.2.3.sql",
and the base scripts are "slony1_base.2.2.3.sql", "slony1_base.v83.2.2.3.sql", and "slony1_base.v84.2.2.3.sql".
3) I have tried renaming "slony1_funcs.2.2.3.sql" to "slony1_funcs.2.2.0.sql" and to "slony1_funcs.sql"; I now get a new error, "Couldn't test for the Slony version. Assuming 1.2.0", followed again by the
"Slony-I creation scripts not available" error message.
Does anyone have an idea how to get this working?

Related

psqlodbc driver not found on Heroku despite being in my app directory

I am trying to get RODBC to work on Heroku. I have a Rails app that calls an R script via RinRuby, which then queries the production database in order to do some analysis. It all works fine on my local Mac, so I thought the best approach was to commit the binary compiled on my Mac (psqlodbcw.so) into my repo and reference it in production as well. Unfortunately, when I try to make the connection in production using this connection string:
> library(RODBC)
> dbhandle <- odbcDriverConnect('driver=./psqlodbcw.so;database=nw_server_production;trusted_connection=true;uid=nw_server')
Warning messages:
1: In odbcDriverConnect("driver=./psqlodbcw.so;database=<db_name>;trusted_connection=true;uid=<user>") :
[RODBC] ERROR: state 01000, code 0, message [unixODBC][Driver Manager]Can't open lib './psqlodbcw.so' : file not found
2: In odbcDriverConnect("driver=./psqlodbcw.so;database=<db_name>;trusted_connection=true;uid=<user>?") :
ODBC connection failed
I have seen this error in a similar post online here, but using SQL Server instead of Postgres. The accepted answer on that post doesn't explain why the file isn't found despite being in the app directory. I did follow the same approach and made my own custom buildpack (available here: https://github.com/NovaWulf/r-rodbc-buildpack). I replaced the .so file with the one I compiled on my Mac, and simply deleted the .rll file and the code that copies it, since I don't have that file (and hopefully don't need it for psqlodbc?). The buildpack runs without error on Heroku, but when I reference the .so file copied from the buildpack, I get the same "file not found" error.
Is this happening because the .so file was compiled for the wrong system architecture? I tried compiling psqlodbc on Linux, but I do not get a psqlodbcw.so file when I do that (let alone an .rll file). The closest thing I get is a file called libodbcpsqlS.so, which is a setup library, not a driver.
Could someone please help me understand the best approach to this problem? Why is Heroku not seeing the file when it is right there? And what is the best solution? Is there a simple way to just download the correct driver file from somewhere?
Any help is much appreciated!
Best,
Paul
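For what it's worth, unixODBC reports "file not found" whenever dlopen fails to load the library, which happens not only when the file is missing but also when it was built for a different platform (a Mac-compiled .so will not load on Heroku's Linux dynos) or when its dependencies cannot be resolved. One hedged alternative is to install a Linux build of the driver on the dyno and reference it by its registered name rather than by path. The sketch below assumes the Debian odbc-postgresql package has been installed (e.g. via an apt buildpack), which typically registers a driver named "PostgreSQL Unicode" in /etc/odbcinst.ini; the DB_HOST and DB_PASSWORD environment variables are hypothetical placeholders for the production credentials:

library(RODBC)

# unixODBC resolves driver={...} against /etc/odbcinst.ini, so no .so path
# (relative or otherwise) needs to appear in the connection string.
dbhandle <- odbcDriverConnect(paste0(
  "driver={PostgreSQL Unicode};",
  "server=", Sys.getenv("DB_HOST"), ";",  # hypothetical env vars holding
  "port=5432;",                           # the production credentials
  "database=nw_server_production;",
  "uid=nw_server;",
  "pwd=", Sys.getenv("DB_PASSWORD")
))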

Unable to run SparkR in RStudio

I can't use SparkR in RStudio because I'm getting an error: Error in sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap, :
JVM is not ready after 10 seconds
I have tried to search for the solution but can't find one. Here is how I have tried to set up SparkR:
Sys.setenv(SPARK_HOME="C:/Users/alibaba555/Downloads/spark") # The path to your spark installation
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library("SparkR", lib.loc="C:/Users/alibaba555/Downloads/spark/R") # The path to the lib folder in the spark location
library(SparkR)
sparkR.session(master="local[*]", sparkConfig=list(spark.driver.memory="2g"))
Now execution starts with a message:
Launching java with spark-submit command
C:/Users/alibaba555/Downloads/spark/bin/spark-submit2.cmd
sparkr-shell
C:\Users\ALIBAB~1\AppData\Local\Temp\Rtmp00FFkx\backend_port1b90491e4622
And finally after a few minutes it returns an error message:
Error in sparkR.sparkContext(master, appName, sparkHome,
sparkConfigMap, : JVM is not ready after 10 seconds
Thanks!
It looks like the path to your spark library is wrong. It should be something like: library("SparkR", lib.loc="C:/Users/alibaba555/Downloads/spark/R/lib")
I'm not sure if that will fix your problem, but it could help. Also, what versions of Spark/SparkR and Scala are you using? Did you build from source?
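Put together, the corrected initialization would look something like the sketch below (assuming Spark really is unpacked at C:/Users/alibaba555/Downloads/spark; note that a missing or incompatible Java installation can produce the same "JVM is not ready" error, so that is worth checking too):

# Sketch of the full corrected setup; assumes Spark is unpacked at this
# path and a compatible Java installation is available on the PATH.
Sys.setenv(SPARK_HOME = "C:/Users/alibaba555/Downloads/spark")

# Prepend Spark's bundled R library, then load SparkR from it (note the
# trailing "R/lib", not just "R").
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library(SparkR, lib.loc = file.path(Sys.getenv("SPARK_HOME"), "R", "lib"))

sparkR.session(master = "local[*]",
               sparkConfig = list(spark.driver.memory = "2g"))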
What seemed to be causing my issues boiled down to our users' working directory being a network-mapped drive.
Changing the working directory fixed the issue.
If by chance you are also using databricks-connect, make sure that the .databricks-connect file is copied into the %HOME% of each user who will be running RStudio, or set up databricks-connect for each of them.

automap library issue on Windows 7 (with R 3.0.1)

I installed the sp and automap libraries for my 64-bit R 3.0.1 under Windows 7 (via the install.packages command). Their installation did not display any error, and library(sp) works fine; however, when I try to execute library(automap) I get the following error:
> library(automap)
Error in gzfile(file, "rb") : cannot open the connection
In addition: Warning messages:
1: In read.dcf(file.path(p, "DESCRIPTION"), c("Package", "Version")) :
cannot open compressed file 'C:/Program Files/R/R-3.0.1/library/sp/DESCRIPTION', probable reason 'No such file or directory'
2: In gzfile(file, "rb") :
cannot open compressed file '', probable reason 'Invalid argument'
I looked at that path and indeed there is no DESCRIPTION file (or folder) there. There is only a libs folder, which contains an x64 folder with the file sp.dll inside.
Any idea what could cause this?
I would definitely try to run R as administrator, both for installing the packages and for loading them. This could solve your problem.
This probably has to do with file permissions. When you install the packages as admin into a location where only admin can read/write, running R as a normal user means you do not have the file permissions needed to load the package. Running R as admin solves this, as admin does have the correct permissions.
Alternatively, you could install your R packages in a location where a normal user has read/write permissions, e.g. C:/Users/UserName (or something like that; I do not have my Windows machine accessible right now). See the sketch below.
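A minimal sketch of that per-user-library route (the folder location is just an illustrative choice, resolved from the USERPROFILE environment variable):

# Create a per-user library in a location a normal user can write to,
# and make sure R searches it first.
user_lib <- file.path(Sys.getenv("USERPROFILE"), "R", "win-library")
dir.create(user_lib, recursive = TRUE, showWarnings = FALSE)
.libPaths(c(user_lib, .libPaths()))

# Install and load from the user library; no admin rights required.
install.packages(c("sp", "automap"), lib = user_lib)
library(automap, lib.loc = user_lib)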

Magento 1.11.1 EE cli-install crashes - Mage/Reports duplicate install script

I'm trying to install Magento 1.11.1 EE from the command line.
The installation crashes at some point, throwing the following error:
ERROR: Error in file: "/app/code/core/Mage/Reports/data/reports_setup/data-install-1.6.0.0.php"
A page URL key for specified store already exists.
Looking at the source code, I found that there are 3 scripts doing the same thing:
/app/code/core/Mage/Reports/sql/reports_setup/mysql4-install-0.7.1.php (initial script)
/app/code/core/Mage/Reports/sql/reports_setup/mysql4-upgrade-0.7.0-0.7.1.php (??? basically doing the same thing as the one above, but with DROP TABLE IF EXISTS)
/app/code/core/Mage/Reports/data/reports_setup/data-install-1.6.0.0.php
The fix for this would be to remove (by patch, of course) one of the scripts (preferably the last one), but I'm trying to understand whether this is something that should be there or just a stupid mistake.
You can refer to the link below if you don't have the code open:
http://www.magentodocs.org/1.7.0.2/d5/dde/_reports_2data_2reports__setup_2data-install-1_86_80_80_8php_source.php

clsql: connect to an Oracle database

I am doing some practice with clsql. I want to connect to my Oracle server, hence my connection function is:
(connect '("192.168.2.3" "xe" "username" "password") :database-type :oracle)
When I hit return, the following error message shows up:
Couldn't load foreign libraries "libclntsh", "oci". (searched *FOREIGN-LIBRARY-SEARCH-PATHS*)
[Condition of type SIMPLE-ERROR]
I have already installed oracle-instantclient11.2-basic-11.2.0.1.0-1.i386.rpm
and defined export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client/lib
So, what else should I do to connect to the server?
I was playing with Oracle lately and found out that all you need is to put the path to libclntsh into /etc/ld.so.conf.d/oracle.conf.
My setup was the following (Red Hat/CentOS, as root): downloaded from Oracle
oracle-instantclient12.1-basic-12.1.0.2.0-1.x86_64.rpm
oracle-instantclient12.1-devel-12.1.0.2.0-1.x86_64.rpm
installed via rpm -ivh oracle*.rpm
created the file /etc/ld.so.conf.d/oracle.conf containing:
/usr/lib/oracle/12.1/client64/lib
then executed ldconfig
Now, as clsql-oracle is not in Quicklisp, I downloaded and extracted clsql-6.6.2, then:
(require "asdf")
(push #P"/opt/jeff/clsql-6.6.2/" asdf:*central-registry*)
(asdf:load-system :clsql-oracle)
(defparameter *some-db* (connect '("127.0.0.1:1521/db1" "SOME_USER_RO" "*******") :database-type :oracle))
and voila, it works
One thing that trips me up with dynamic linking to the Oracle libs (in C/C++, that is) is the fact that the libclntsh.so shared object ships with the version number appended after the .so suffix. So you may need to create a soft link in the same directory, ensuring that the soft link's name is just libclntsh.so.
