Oracle ODBC Issues with CentOS 7

I installed Oracle ODBC on CentOS 7 successfully. However, I get the error [IM004] [unixODBC][Driver Manager]Driver's SQLAllocHandle on SQL_HANDLE_HENV failed and have not found a solution.
These are the installation files:
unixODBC-2.3.11.tar.gz
instantclient-basic-linux.x64-19.18.0.0.0dbru.zip
instantclient-odbc-linux.x64-19.18.0.0.0dbru.zip
First, here is the installation process:
tar -xvzf ./unixODBC-2.3.11.tar.gz -> ./configure -> make -> make install
unzip ./instantclient-basic-linux.x64-19.18.0.0.0dbru.zip
unzip ./instantclient-odbc-linux.x64-19.18.0.0.0dbru.zip
cd instantclient_19_18/
./odbc_update_ini.sh /usr/local /usr/lib64 Oracle
Here are the location and contents of each important configuration file.
/usr/local/etc/odbc.ini
[Oracle226]
Application Attributes = T
Attributes = W
BatchAutocommitMode = IfAllSuccessful
CloseCursor = F
DisableDPM = F
DisableMTS = T
Driver = Oracle
EXECSchemaOpt =
EXECSyntax = T
Failover = T
FailoverDelay = 10
FailoverRetryCount = 10
FetchBufferSize = 2000
ForceWCHAR = F
Lobs = F
Longs = T
MetadataIdDefault = F
QueryTimeout = T
ResultSets = T
ServerName = //IP:PORT/DB
SQLGetData extensions = F
Translation DLL =
Translation Option = 0
UserID = DB_ID
Password = DB_PW
Port = PORT
DatabaseCharacterSet = AL32UTF8
#DatabaseCharacterSet = euckr
/usr/local/etc/odbcinst.ini
[Oracle]
Description = Oracle ODBC driver for Oracle 19
Driver = /usr/local/libsqora.so.19.1
Setup =
FileUsage =
CPTimeout =
CPReuse =
/etc/profile
export ODBCSYSINI=/usr/local/etc
export ODBCINI=/usr/local/etc/odbc.ini
export LD_LIBRARY_PATH=/usr/lib64:/usr/local/etc
odbcinst -j
unixODBC 2.3.11
DRIVERS............: /usr/local/etc/odbcinst.ini
SYSTEM DATA SOURCES: /usr/local/etc/odbc.ini
FILE DATA SOURCES..: /usr/local/etc/ODBCDataSources
USER DATA SOURCES..: /usr/local/etc/odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
Along the way I also ran commands such as source /etc/profile and ldconfig. It is hard to use yum because this machine is on a closed (air-gapped) network.
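For reference, here is a small Python sketch (assuming Python is available on the closed network, and pyodbc for the second check, which is an extra assumption) that checks whether the driver library named in odbcinst.ini loads at all and whether the DSN connects outside of any application:

import ctypes

# Driver= path from /usr/local/etc/odbcinst.ini above.
try:
    ctypes.CDLL("/usr/local/libsqora.so.19.1")
    print("driver library loads OK")
except OSError as exc:
    # An OSError here means the shared library or one of its dependencies
    # cannot be resolved with the current LD_LIBRARY_PATH.
    print("driver library failed to load:", exc)

# Optional: reproduce the connection attempt itself (requires pyodbc).
try:
    import pyodbc
    conn = pyodbc.connect("DSN=Oracle226;UID=DB_ID;PWD=DB_PW")
    cur = conn.cursor()
    cur.execute("SELECT 1 FROM DUAL")
    print("connected:", cur.fetchone())
except Exception as exc:
    print("connection failed:", exc)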
The database I am trying to connect to is an Oracle DB at a different IP. I tried Googling, but found nothing that helped. Please help me.

Related

When importing a NetCDF into a GeoServer data store, a non-obvious error occurs

I've a brand new install of Geoserver 2.22 on Ubuntu 22.04 and installation was smooth. I've added the official NetCDF plugin by unzipping the contents to the WEB-INF/lib/ folder, and it shows up as a type in the data store. Great!
I have a selection of NetCDFs that can be loaded successfully elsewhere (QGIS, ArcGIS Pro, Python via xarray); however, when I attempt to create a new data store, choose NetCDF, and select the .nc files, I get the following error message:
There was an error trying to connect to store AFDRS_FSE_curing. Do you want to save it anyway?
Original exception error:
Failed to create reader from file:efs/temp_surface.nc and hints Hints:
FORCE_LONGITUDE_FIRST_AXIS_ORDER = true
EXECUTOR_SERVICE = java.util.concurrent.ThreadPoolExecutor@1242674b[Running, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
FILTER_FACTORY = FilterFactoryImpl
STYLE_FACTORY = StyleFactoryImpl
FEATURE_FACTORY = org.geotools.feature.LenientFeatureFactoryImpl@6bfaa0a6
FORCE_AXIS_ORDER_HONORING = http
GRID_COVERAGE_FACTORY = GridCoverageFactory
TILE_ENCODING = null
REPOSITORY = org.geoserver.catalog.CatalogRepository@3ef5cfc5
LENIENT_DATUM_SHIFT = true
COMPARISON_TOLERANCE = 1.0E-8
What am I missing here? That error message doesn't seem to highlight anything obvious as the cause...
The NetCDFs are located in: /usr/share/geoserver/data_dir/data
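Since the files open fine in xarray, a small sketch like the following (the filename is taken from the error message and assumed to live in the data directory above) dumps the coordinate and attribute metadata that GeoServer's NetCDF reader typically relies on:

import xarray as xr

# Filename from the error message; adjust the path as needed.
ds = xr.open_dataset("/usr/share/geoserver/data_dir/data/temp_surface.nc")
print(ds)            # dimensions, coordinates and data variables
print(ds.attrs)      # global attributes (e.g. Conventions = CF-1.x)
for name, var in ds.variables.items():
    print(name, var.dims, var.attrs.get("units"), var.attrs.get("standard_name"))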

How do I install snowflake.sqlalchemy in Anaconda?

I'm trying to connect to Snowflake in Python; at present I am unsuccessful. I have read forums about using the engine approach, i.e.:
import pandas as pd
from sqlalchemy import create_engine
from snowflake.sqlalchemy import URL

url = URL(
account = 'xxxx',
user = 'xxxx',
password = 'xxxx',
database = 'xxx',
schema = 'xxxx',
warehouse = 'xxx',
role='xxxxx',
authenticator='https://xxxxx.okta.com',
)
engine = create_engine(url)
connection = engine.connect()
query = '''
select * from MYDB.MYSCHEMA.MYTABLE
LIMIT 10;
'''
df = pd.read_sql(query, connection)
but I get the error:
ModuleNotFoundError: No module named 'snowflake.sqlalchemy'
How do I install this module in Anaconda? I cannot find a way around this, and the other approaches I have read about do not work.
This worked for me:
conda install -c conda-forge snowflake-sqlalchemy
Don't use import snowflake together with from snowflake.sqlalchemy.
Use only:
from snowflake.sqlalchemy import URL
Even in an Anaconda environment you can use pip. Did you try pip install snowflake-sqlalchemy?
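To make the import point concrete, a minimal sketch (placeholder values, assuming snowflake-sqlalchemy was installed via conda or pip as above):

from sqlalchemy import create_engine
from snowflake.sqlalchemy import URL   # no bare "import snowflake" needed

# URL() just builds the snowflake:// connection string that create_engine() consumes.
url = URL(
    account='xxxx',
    user='xxxx',
    password='xxxx',
    database='xxx',
    schema='xxxx',
    warehouse='xxx',
)
print(url)
engine = create_engine(url)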

Why do I get these errors when I install Sitecore 9?

I am following the official documentation, "Sitecore Experience Platform 9.0 Update 1", and after running the PowerShell script below
...
#install client certificate for xconnect
$certParams = @{
Path = "$PSScriptRoot\xconnect-createcert.json"
CertificateName = "$prefix.xconnect_client"
}
Install-SitecoreConfiguration @certParams -Verbose <----------------
#install solr cores for xdb
$solrParams = @{
Path = "$PSScriptRoot\xconnect-solr.json"
SolrUrl = $SolrUrl
SolrRoot = $SolrRoot
SolrService = $SolrService
CorePrefix = $prefix
Name = "SC9"
}
Install-SitecoreConfiguration @solrParams
...
I get errors... I can't understand what I am doing wrong. Please help me!

Sphinx + Oracle : Data source name not found error

I want to connect to a remote Oracle database server and index some data from there with the Sphinx search engine. My OS is Ubuntu 16.04. I have installed Sphinx on it and tested it with a local MySQL database, and everything was OK (all the data was indexed, I could search, and the results were correct). I have also installed unixODBC and tested remote access to the Oracle database server with the isql tool, and everything was OK, but when I try to index data with Sphinx's indexer command, this error occurs:
sql_connect: [unixODBC][Driver Manager]Data source name not found, and no default driver specified
Here is the source block of my sphinx.conf file:
source src2
{
type = odbc
sql_host = hostName
sql_user = user
sql_pass = pass
sql_db = dbname
sql_port = 1521
odbc_dsn = DSN = mydsn; Driver={Oracle};Dbq=hostname:1521/dbname;Uid=user;Pwd=pass
sql_query = \
SELECT tableId, Name \
FROM sampleTable
}
And odbc.ini file:
[mydsn]
Application Attributes = T
Attributes = W
BatchAutocommitMode = IfAllSuccessful
BindAsFLOAT = F
CloseCursor = F
DisableDPM = F
DisableMTS = T
Driver = Oracle
DSN = mydsn
EXECSchemaOpt =
EXECSyntax = T
Failover = T
FailoverDelay = 10
FailoverRetryCount = 10
FetchBufferSize = 64000
ForceWCHAR = F
Lobs = T
Longs = T
MaxLargeData = 0
MetadataIdDefault = F
QueryTimeout = T
ResultSets = T
ServerName = MYDATABASE
SQLGetData extensions = F
Translation DLL =
Translation Option = 0
DisableRULEHint = T
UserID = user
Password = pass
StatementCache=F
CacheBufferSize=20
UseOCIDescribeAny=F
SQLTranslateErrors=F
MaxTokenSize=8192
AggregateSQLType=FLOAT
and the odbcinst.ini file:
[Oracle]
Description= ODBC for Oracle
Driver = /opt/oracle/instantclient_12_2/libsqora.so.12.1
Setup =
FileUsage = 1
CPTimeout =
CPReuse =
Try
odbc_dsn = DSN=mydsn;
i.e. without spaces around the = after DSN. Since you have everything else specified in the ini file, just the DSN should be enough. You also need only sql_query out of the rest of the sql_* options. Like this:
source src2
{
type = odbc
odbc_dsn = DSN=mydsn;
sql_query = \
SELECT tableId, Name \
FROM sampleTable
}
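As a sanity check outside Sphinx (assuming pyodbc is available; it is not part of the original setup), the same DSN-only string can be exercised directly against unixODBC, since UserID and Password are already in odbc.ini:

import pyodbc

# The same "DSN=mydsn;" string the indexer hands to the driver manager.
conn = pyodbc.connect("DSN=mydsn;")
cur = conn.cursor()
cur.execute("SELECT tableId, Name FROM sampleTable WHERE ROWNUM <= 5")
for row in cur.fetchall():
    print(row)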

Flume Tail a File

I am new to Flume NG and need help tailing a file. I have a cluster running Hadoop with Flume running remotely, and I connect to this cluster using PuTTY. I want to tail a file on my PC and put it into HDFS on the cluster. I am using the following configuration to do this.
# flume.conf: exec source, hdfs sink
# Name the components on this agent
tier1.sources = r1
tier1.sinks = k1
tier1.channels = c1
# Describe/configure the source
tier1.sources.r1.type = exec
tier1.sources.r1.command = tail -F /(Path to file on my PC)
# Describe the sink
tier1.sinks.k1.type = hdfs
tier1.sinks.k1.hdfs.path = /user/ntimbadi/flume/
tier1.sinks.k1.hdfs.filePrefix = events-
tier1.sinks.k1.hdfs.round = true
tier1.sinks.k1.hdfs.roundValue = 10
tier1.sinks.k1.hdfs.roundUnit = minute
# Use a channel which buffers events in memory
tier1.channels.c1.type = memory
tier1.channels.c1.capacity = 1000
tier1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
tier1.sources.r1.channels = c1
tier1.sinks.k1.channel = c1
I believe the mistake is in the source. This kind of source does not take a host name or IP to look at (which in this case should be my PC). Could someone give me a hint as to how to tail a file on my PC and upload it to the remote HDFS using Flume?
The exec source in your configuration will run on the machine where you start Flume's tier1 agent. If you want to collect data from another machine, you'll need to start a Flume agent on that machine too. To sum up, you need:
an agent (remote1) running on the remote machine that has an avro source, which will listen for events from collector agents and act as an aggregator.
an agent (local1) running on your machine (acting as a collector) that has an exec source and sends data to the remote agent via an avro sink.
Alternatively, you can have only one Flume agent running on your local machine (with the same configuration you posted) and set the HDFS path as "hdfs://REMOTE_IP/hdfs/path" (though I'm not entirely sure this will work).
edit:
Below are sample configurations for the two-agent scenario (they may not work without some modification).
remote1.channels.mem-ch-1.type = memory
remote1.sources.avro-src-1.channels = mem-ch-1
remote1.sources.avro-src-1.type = avro
remote1.sources.avro-src-1.port = 10060
# REPLACE WITH YOUR MACHINE'S EXTERNAL IP
remote1.sources.avro-src-1.bind = 10.88.66.4
remote1.sinks.k1.channel = mem-ch-1
remote1.sinks.k1.type = hdfs
remote1.sinks.k1.hdfs.path = /user/ntimbadi/flume/
remote1.sinks.k1.hdfs.filePrefix = events-
remote1.sinks.k1.hdfs.round = true
remote1.sinks.k1.hdfs.roundValue = 10
remote1.sinks.k1.hdfs.roundUnit = minute
remote1.sources = avro-src-1
remote1.sinks = k1
remote1.channels = mem-ch-1
and
local1.channels.mem-ch-1.type = memory
local1.sources.exc-src-1.channels = mem-ch-1
local1.sources.exc-src-1.type = exec
local1.sources.exc-src-1.command = tail -F /(Path to file on my PC)
local1.sinks.avro-snk-1.channel = mem-ch-1
local1.sinks.avro-snk-1.type = avro
# REPLACE WITH REMOTE IP
local1.sinks.avro-snk-1.hostname = 10.88.66.4
local1.sinks.avro-snk-1.port = 10060
local1.sources = exc-src-1
local1.sinks = avro-snk-1
local1.channels = mem-ch-1
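Both agents would then be started with Flume's standard launcher, for example flume-ng agent --conf conf --conf-file remote1.conf --name remote1 on the remote machine and flume-ng agent --conf conf --conf-file local1.conf --name local1 on your PC (the .conf file names here are just placeholders for wherever you save the two configurations).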
