Accessing Oracle from AWS Lambda in Python - aws-lambda

I am writing (hopefully) a simple AWS Lambda that will do an RDS Oracle SQL SELECT and email the results. So far I have been using the Lambda Management Console, but all the examples I've run across talk about making a Lambda Deployment Package. So my first question is: can I do this from the Lambda Management Console?
Next question: what do I import for the Oracle DB API? In all the examples I have seen, they download and build a package with pip, but that would then seem to imply using a Deployment Package (see above). Trying to import any of the modules listed in the examples simply gives "No module named ...".
After writing the above I dug into the boto3 API reference and came up with:
import boto3
client = boto3.client('rds-data')
But it gives the error: Unknown service: 'rds-data'.
So I'm still lost.
As you can probably tell, I'm new to the Lambda environment. Any suggestions or examples would be greatly appreciated. Thanks.

This is an update of the solution using the 18c Oracle client libraries. If it wasn't for the main solution it would have taken me a lot longer to get my code working. This will hopefully help anyone that follows.
(an aside - I tried getting it working with the instantclient_19_3 but went round in circles for a day, and then tried with instantclient_18_5 and it worked)
Files downloaded and used
instantclient-basic-linux.x64-18.5.0.0.0dbru.zip (all files)
cx_Oracle 7.2.2 (https://cx-oracle.readthedocs.io/en/latest/release_notes.html#releasenotes)
libaio.so.1.0.1 (as described in the main answer, renamed to libaio.so.1)
This then gave these files in the zip (lambda_function.py is my Python source code):
zip contents
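For anyone following the same packaging route, below is a minimal sketch of what a lambda_function.py handler could look like. The host, port, service name and environment variable names are placeholders for illustration, not values from this setup; the instant client .so files sitting at the root of the zip should be picked up because the task directory is on Lambda's library search path.
import os
import cx_Oracle

def lambda_handler(event, context):
    # Placeholder env vars -- set these on the Lambda, or fetch them from Secrets Manager
    dsn = cx_Oracle.makedsn(
        os.environ["DB_HOST"],
        int(os.environ.get("DB_PORT", 1521)),
        service_name=os.environ["DB_SERVICE"],
    )
    conn = cx_Oracle.connect(os.environ["DB_USER"], os.environ["DB_PASS"], dsn)
    try:
        cur = conn.cursor()
        cur.execute("SELECT sysdate FROM dual")  # replace with your real SELECT
        rows = cur.fetchall()
        cur.close()
    finally:
        conn.close()
    return {"rowcount": len(rows)}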

Apparently, AWS Lambda is using an older version of boto3, which does not have rds-data yet.
So I'm afraid you will have to create a deployment package containing a more recent version of boto3.
One way to do this would be to:
Create your lambda handler file (in this case named index.py).
import boto3

def my_handler(event, context):
    client = boto3.client('rds-data')
    print(client)
    # do stuff
    return "hello world"
Add a requirements.txt file in the same folder, which will contain something like:
awscli >= 1.16.118
boto3 >= 1.9.108
Now run this (depending on the setup on your computer, you can use pip instead of pip3) in the directory/folder of your index and requirements files:
pip3 install -r requirements.txt -t .
zip -r somezipname .
Next, upload this zip and change your handler 'entry point' to index.my_handler. The code should now run without errors.
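For completeness, once the newer boto3 is bundled, a query through the Data API looks roughly like the sketch below. The ARNs and database name are placeholders, and note (as a later answer points out) that 'rds-data' only works with Aurora, not with an RDS Oracle instance.
import boto3

# Placeholders -- substitute your own cluster and secret ARNs.
client = boto3.client("rds-data")
response = client.execute_statement(
    resourceArn="arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster",
    secretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-secret",
    database="mydb",
    sql="SELECT 1",
)
print(response["records"])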

An older version of boto3 does not support rds-data, but you can create a deployment package as a zip.
I recommend using cx_Oracle: install cx_Oracle with pip, bundle it into the zip, and upload that. See: How can I access Oracle from Python?

After much groaning and gnashing of teeth I have come up with a successful solution.
rds_data (as confirmed by AWS Support) only supports Aurora databases. Wish the AWS documents mentioned this. 8{(>
Thanks to the answers above as well as Jason Landrey for hints as to the solution.
In order to access RDS/Oracle, you need to use cx_Oracle. But wait, there's more.
cx_Oracle is not in the standard Lambda environment, so you need to bring your own. My development environment is on Windows, but the Lambda environment is Linux, so you need to download the Linux wheel and install it into your packaging directory. I got mine from https://pypi.org/project/cx-Oracle/#files. Install locally with:
pip install cx_Oracle-7.1.2-cp37-cp37m-manylinux1_x86_64.whl -t .
You will see several files appear in the packaging directory. Then you need to find a Linux system, download /lib64/libaio.so.1.0.1, and name it libaio.so.1 in your packaging directory.
And then you need to download both Oracle instant client basic and SDK packages from http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html.
Create a zip file with all these items (including your own Python source). In doing so, rename Oracle instant client files libclntsh.so.11.1 to libclntsh.so and libocci.so.11.1 to libocci.so.
Upload the zip to an S3 bucket, as the direct deploy is limited to 66 MB and this zip is a bit larger.
Create a Lambda with the appropriate IAM permissions and VPC access, install the package and it should be good to go.
I found that if you don't include all the instant client files you start getting Oracle errors about missing timezone and NLS information.
List of zip contents (for me, YMMV):
7996693 08/24/2013 12:30 libnnz11.so
0 03/11/2019 16:10 cx_Oracle-7.1.1.data/
0 03/11/2019 16:10 cx_Oracle-7.1.1.data/data/
0 03/11/2019 16:10 cx_Oracle-7.1.1.data/data/cx_Oracle-doc/
0 03/11/2019 16:10 cx_Oracle-7.1.1.dist-info/
1325 03/13/2019 12:35 Email.py
1805 02/19/2019 21:11 cx_Oracle-7.1.1.data/data/cx_Oracle-doc/LICENSE.txt
163 02/19/2019 21:11 cx_Oracle-7.1.1.data/data/cx_Oracle-doc/README.txt
851 02/19/2019 21:11 cx_Oracle-7.1.1.dist-info/METADATA
628 02/19/2019 21:12 cx_Oracle-7.1.1.dist-info/RECORD
109 02/19/2019 21:12 cx_Oracle-7.1.1.dist-info/WHEEL
10 02/19/2019 21:11 cx_Oracle-7.1.1.dist-info/top_level.txt
2270301 02/19/2019 21:11 cx_Oracle.cpython-37m-x86_64-linux-gnu.so
2140 03/13/2019 14:21 getSecrets.py
5560 03/12/2019 08:48 libaio.so.1
53865194 08/24/2013 12:30 libclntsh.so
118738042 08/24/2013 12:30 libociei.so
7633 03/13/2019 16:39 scheduleReports.py
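The getSecrets.py and Email.py files in the listing aren't shown here; as a rough sketch of that general pattern (not the actual code from this deployment), credential lookup and mailing the results could look something like the following, with the secret name, region and e-mail addresses as placeholders.
import json
import boto3

def get_db_secret(secret_name="prod/oracle/report-user", region="us-east-1"):
    # Placeholder secret name/region; assumes the credentials are stored as a JSON secret.
    sm = boto3.client("secretsmanager", region_name=region)
    value = sm.get_secret_value(SecretId=secret_name)
    return json.loads(value["SecretString"])  # e.g. {"user": "...", "password": "..."}

def email_results(body, sender="reports@example.com", recipient="me@example.com"):
    # Placeholder addresses; the sender must be verified in SES (and the recipient too while in the SES sandbox).
    ses = boto3.client("ses")
    ses.send_email(
        Source=sender,
        Destination={"ToAddresses": [recipient]},
        Message={
            "Subject": {"Data": "Scheduled report"},
            "Body": {"Text": {"Data": body}},
        },
    )
With this pattern the Lambda's execution role would also need secretsmanager:GetSecretValue and ses:SendEmail permissions, in addition to the VPC access mentioned above.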

Related

Using k8s deepcopy-gen to generate code fails

I am trying to use the k8s code-generator to generate a deepcopy file for my project under GOPATH/src, but it does not seem to work and I get a problem about GOROOT.
The deepcopy-gen command is deepcopy-gen -i k8s_customize_controller/pkg/apis -p k8s_customize_controller/pkg/client -v 10
Output below:
[root@centos72-k8s code-generator]# deepcopy-gen -i k8s_customize_controller/pkg/apis -p k8s_customize_controller/pkg/client -v 10
I0122 02:51:04.609157 17278 parse.go:383] importPackage k8s_customize_controller/pkg/apis
I0122 02:51:04.609359 17278 parse.go:330] addDir k8s_customize_controller/pkg/apis
I0122 02:51:04.730397 17278 parse.go:404] unable to import "k8s_customize_controller/pkg/apis": package k8s_customize_controller/pkg/apis is not in GOROOT (/usr/local/go/src/k8s_customize_controller/pkg/apis)
I0122 02:51:04.730701 17278 main.go:82] Completed successfully.
unable to import "k8s_customize_controller/pkg/apis": package k8s_customize_controller/pkg/apis is not in GOROOT
It seems this is a problem with GOROOT?
How do I resolve it?
I had a similar issue with an error like this:
Generating deepcopy funcs
F1018 10:51:28.259741 74132 main.go:80] Error: Failed making a parser: unable to add directory "github.com/[my-git-account]/[repo-name]/pkg/apis/v1": No files for pkg "github.com/[my-git-account]/[repo-name]/pkg/apis/v1"
The problem was that I recently moved my github golang project folders out of the $GOPATH/src folder (which in my case is ~/go/src). It worked well on vanilla Ubuntu and WSL Ubuntu, but updating packages was a challenge on macOS, so I moved all my projects from the folder ~/go/src/github.com/[my-git-account] (where the code-generator expected them) to the folder ~/dev/[my-git-account].
The solution I used to fix the error above is to create a symbolic link from my current github projects folder into $GOPATH/src/github.com:
ln -s ~/dev/[my-git-account] $GOPATH/src/github.com
This way there is a folder $GOPATH/src/github.com/[my-git-account] (provided by the symlink) containing the golang projects, where the code-generator can find them.
One drawback of this trick: in the IDE, navigating to a method can jump to the linked source (inside the SDK package via the link), not to the source code opened in the IDE.

How to create a custom Python environment from a yml file *with* downloads of missing packages

I am trying to build a python 3.5 environment that supports an old hddm library. Standard approaches fail due to my/anaconda's apparent inability to ignore (or downgrade) the 10.1 CUDA library in favor of an older one that works with hddm.
There is a yml file available that describes a successful environment. But the advertised command
conda env create --file hddm_py35.yml
fails with an error listing all of the packages "not found." Here are the errors.
(base) PS C:\Users\Peter\anaconda3_Sep2020> conda env create --file .\hddm_py35.yml
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
odo==0.5.0=py35_1
cffi==1.7.0=py35_0
dill==0.2.5=py35_0
singledispatch==3.4.0.3=py35_0
nb_conda_kernels==2.0.0=py35_0
requests==2.14.2=py35_0
scikit-learn==0.17.1=np111py35_1
wheel==0.29.0=py35_0
jedi==0.9.0=py35_1
widgetsnbextension==1.2.6=py35_0
bitarray==0.8.1=py35_1
theano==1.0.2=py35_0
pytz==2016.6.1=py35_0
pylint==1.5.4=py35_1
ruamel_yaml==0.11.14=py35_0
partd==0.3.6=py35_0
llvmlite==0.13.0=py35_0
multipledispatch==0.4.8=py35_0
pyparsing==2.1.4=py35_0
console_shortcut==0.1.1=py35_1
ipython_genutils==0.1.0=py35_0
patsy==0.4.1=py35_0
pytest==2.9.2=py35_0
heapdict==1.0.0=py35_1
ipywidgets==5.2.2=py35_0
bokeh==0.12.2=py35_0
hdf5==1.8.15.1=2
networkx==1.11=py35_0
backports==1.0=py35_0
pyasn1==0.1.9=py35_0
pyqt==5.6.0=py35h6538335_6
zlib==1.2.11=hbb18732_2
et_xmlfile==1.0.1=py35_0
traitlets==4.3.0=py35_0
colorama==0.3.7=py35_0
argcomplete==1.0.0=py35_1
pywin32==220=py35_1
astropy==1.2.1=np111py35_0
nose==1.3.7=py35_1
freetype==2.8=h0224ed4_1
pkginfo==1.3.2=py35_0
cloudpickle==0.2.1=py35_0
sqlalchemy==1.0.13=py35_0
lazy-object-proxy==1.2.1=py35_0
markupsafe==0.23=py35_2
prompt_toolkit==1.0.3=py35_0
pickleshare==0.7.4=py35_0
itsdangerous==0.24=py35_0
babel==2.3.4=py35_0
click==6.6=py35_0
six==1.10.0=py35_0
libdynd==0.7.2=0
jdcal==1.2=py35_1
pymc==2.3.6=np111py35_2
pathlib2==2.1.0=py35_0
astroid==1.4.7=py35_0
numba==0.28.1=np111py35_0
qtconsole==4.2.1=py35_2
wrapt==1.10.6=py35_0
idna==2.1=py35_0
pytables==3.2.2=np111py35_4
_nb_ext_conf==0.3.0=py35_0
dynd-python==0.7.2=py35_0
numexpr==2.6.1=np111py35_0
werkzeug==0.11.11=py35_0
rope==0.9.4=py35_1
jupyter_client==4.4.0=py35_0
pyzmq==15.4.0=py35_0
python-dateutil==2.5.3=py35_0
beautifulsoup4==4.5.1=py35_0
blaze==0.10.1=py35_0
nbformat==4.1.0=py35_0
nbpresent==3.0.2=py35_0
sip==4.18=py35_0
chest==0.2.3=py35_0
glob2==0.5=py35_0
locket==0.2.0=py35_1
mistune==0.7.3=py35_0
alabaster==0.7.9=py35_0
setuptools==27.2.0=py35_1
win_unicode_console==0.5=py35_0
filelock==2.0.6=py35_0
_license==1.1=py35_1
ipykernel==4.5.0=py35_0
qt==5.6.2=vc14h6f76a7e_12
pep8==1.7.0=py35_0
xlwings==0.10.0=py35_0
spyder==3.0.0=py35_0
xlrd==1.0.0=py35_0
scipy==0.18.1=np111py35_0
dask==0.11.0=py35_0
nbconvert==4.2.0=py35_0
pip==8.1.2=py35_0
mkl==11.3.3=1
nb_anacondacloud==1.2.0=py35_0
cython==0.24.1=py35_0
flask-cors==2.1.2=py35_0
ipython==5.1.0=py35_0
cycler==0.10.0=py35_0
jpeg==9b=he27b436_2
menuinst==1.4.1=py35_0
anaconda==4.2.0=np111py35_0
configobj==5.0.6=py35_0
boto==2.42.0=py35_0
unicodecsv==0.14.1=py35_0
scikit-image==0.12.3=np111py35_1
contextlib2==0.5.3=py35_0
conda-build==3.0.19=py35h15d37ab_0
jinja2==2.8=py35_1
conda-verify==2.0.0=py35_0
get_terminal_size==1.0.0=py35_0
qtpy==1.1.2=py35_0
anaconda-client==1.5.1=py35_0
decorator==4.0.10=py35_0
ply==3.9=py35_0
openpyxl==2.3.2=py35_0
sockjs-tornado==1.0.3=py35_0
pyyaml==3.12=py35_0
snowballstemmer==1.2.1=py35_0
toolz==0.8.0=py35_0
py==1.4.31=py35_0
xlwt==1.1.2=py35_0
clyent==1.2.2=py35_0
bottleneck==1.1.0=np111py35_0
jupyter==1.0.0=py35_3
mkl-service==1.1.2=py35_2
simplegeneric==0.8.1=py35_1
wcwidth==0.1.7=py35_0
h5py==2.6.0=np111py35_2
gevent==1.1.2=py35_0
pycrypto==2.6.1=py35_4
datashape==0.5.2=py35_0
psutil==4.3.1=py35_0
nltk==3.2.1=py35_0
jsonschema==2.5.1=py35_0
notebook==4.2.3=py35_0
pycparser==2.14=py35_1
xlsxwriter==0.9.3=py35_0
jupyter_core==4.2.0=py35_0
qtawesome==0.3.3=py35_0
fastcache==1.0.2=py35_1
jupyter_console==5.0.0=py35_0
tornado==4.4.1=py35_0
path.py==8.2.1=py35_0
pyflakes==1.3.0=py35_0
sympy==1.0=py35_0
pandas==0.20.1=np111py35_0
pygments==2.1.3=py35_0
anaconda-clean==1.0.0=py35_0
mpmath==0.19=py35_1
comtypes==1.1.2=py35_0
cryptography==1.5=py35_0
chardet==3.0.4=py35_0
entrypoints==0.2.2=py35_0
sphinx==1.4.6=py35_0
greenlet==0.4.10=py35_0
anaconda-navigator==1.3.1=py35_0
flask==0.11.1=py35_0
pyopenssl==16.2.0=py35_0
lxml==3.6.4=py35_0
icu==58.2=h3fcc66b_1
docutils==0.12=py35_2
statsmodels==0.6.1=np111py35_1
nb_conda==2.0.0=py35_0
imagesize==0.7.1=py35_0
(base) PS C:\Users\Peter\anaconda3_Sep2020>
The failure occurred within seconds. I get the feeling that conda didn't even try to look for these packages!?!?
Am I supposed to download these packages, put them somewhere, and then tell conda to find them on my hard drive?
Is there a flag that tells conda to do its usual find-and-load for all "missing" packages -- but only in the environment I'm describing? In my base environment (3.8) I don't wish to downgrade.
Should I make a new 3.5 environment and then work through the list one-by-one and uninstall/remove/downgrade each package by hand?
Meta question: This must be a FAQ, and yet I'm not able to google for the answer. That usually means googling for "conda install environment from yaml file" doesn't contain the appropriate vocabulary for, well, trying to induce conda to install an environment from a yaml file. What question should I have asked?
1) Am I supposed to download these packages, put them somewhere, and then tell conda to find them on my hard drive?
Not necessary. But searching for the versions on anaconda.org helps identify channels for one-by-one manual download.
2) Is there a flag that tells conda to do its usual find-and-load for all "missing" packages -- but only in the environment I'm describing? In my base environment (3.8) I don't wish to downgrade.
There is no evidence that conda will automatically download files listed in a yaml file that are missing in the present environment.
3) Should I make a new 3.5 environment and then work through the list one-by-one and uninstall/remove/downgrade each package by hand?
Yes.
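If it helps with that one-by-one pass, a small sketch like the one below (assuming the usual name=version=build dependency lines in hddm_py35.yml) prints each pin without its build string, which is often the part that won't resolve on a different platform:
# Sketch: list the pinned packages from the yml without their build strings,
# so they can be tried one at a time in a fresh py35 environment.
with open("hddm_py35.yml") as f:
    for line in f:
        line = line.strip()
        if not line.startswith("- ") or line.endswith(":"):
            continue  # skip non-dependency lines and section markers like "- pip:"
        parts = [p for p in line[2:].split("=") if p]  # tolerates "=" or "=="
        if len(parts) >= 2:
            print(parts[0] + "==" + parts[1])  # e.g. odo==0.5.0
        elif parts:
            print(parts[0])  # channel names or unpinned entries pass through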
4) Meta question: This must be a FAQ, and yet I'm not able to google for the answer. That usually means googling for "conda install environment from yaml file" doesn't contain the appropriate vocabulary for, well, trying to induce conda to install an environment from a yaml file. What question should I have asked?
There is no evidence that yaml files are anything other than lists of versions of packages in an environment. They cannot be used to make new environments (unless all of the components are already present in the host environment, maybe), so their value is largely annotative. Evidently.
For the case of making an environment for hddm in 2020, well, don't try. CUDA support will work against you. There is an hddm host at https://colab.research.google.com/ that is properly configured (without CUDA disruption), so you can use it to kick the tires, etc. Getting hddm to work in any other context probably requires dedicated hardware, so that the CUDA driver can be manipulated for this application only without breaking any other applications in the process.

Can't create new versions of cloud code in Parse on Buddy

I'm using Parse on Buddy as my development environment. My problem is that the current cloud code version is 41, but when I create a new version, the new version is not created. Can you point out what the problem is?
I was experiencing the same issue. The following log shows it (note version 6 apparently being created successfully but then not listed):
D:\MyCloudCode>parse-on-buddy -c 6
Listing application versions...
Walking local public directory subtree...
Listing existing hash blobs...
11 public assets already synchronized!
Uploading cloud code...
Uploading name → hash mapping...
Setting active version...
All done!
D:\MyCloudCode>parse-on-buddy -l
Listing application versions...
1 2 3 4 5
After upgrading to the latest parse-on-buddy CLI, the upload indicated a syntax error in my .js files which was not shown by the old CLI. With that fixed, the upload of new versions now works like a charm.

Chilkat ftp.SyncLocalDir with open files?

I’m having an issue with ftp.SyncLocalDir when I have an open file in the local directory.
I’m using the example from http://www.example-code.com/vbnet/ftp_syncLocalTree.asp with a few minor changes. It has been working fine for a few days and then has stopped working.
I’ve found that one of the files is open on the local directory. Looking through the http://chilkatforum.com/ forum I see that one of the answers stated that
“Chilkat will detect errors that are likely permission/access errors and will continue with the remainder of the download.”
This is not happening for me. Looking at the last error text, it states that the file is in use by another process. No other files get synchronized.
Is there something else I need to add to the code to force it to continue after the error?
Below is the last error text.
Thanks,
Steve
ChilkatLog:
SyncLocalDir:
DllDate: Dec 5 2014
ChilkatVersion: 9.5.0.46
UnlockPrefix: *********
Username: *********
Architecture: Little Endian; 32-bit
Language: .NET 4.0
VerboseLogging: 0
commandCharset: ansi
dirListingCharset: ansi
localDirPath: Q:\TEST
mode: 2
ProgressMonitoring:
enabled: yes
heartbeatMs: 0
sendBufferSize: 65536
--ProgressMonitoring
downloadDir:
getFile2:
localFilename: Q:\TEST/LINE_6 _13.csv
Replacing existing local file
openForReadWriteWin32:
Failed to open file (2)
localFilePath: Q:\TEST\LINE_6 _13.csv
currentWorkingDirectory: H:\Code In Progress\LLS\Gen 3 Test And Crimp
w-network\VB Code\trunk\FTP Syncronize\bin\Debug
osErrorInfo: The process cannot access the file because it is being used by another process.
localWindowsFilePath: Q:\TEST\Line 6\LINE_6 _13.csv
--openForReadWriteWin32
--getFile2
Failed to download file
failedFilename: /LINE_6 _13.csv
--downloadDir
Failed.
--SyncLocalDir
--ChilkatLog
Please try this new build for the .NET 4.0 Framework:
32-bit Download: http://www.chilkatsoft.com/download/preRelease/ChilkatDotNet4-9.5.0-win32.zip
64-bit Download: http://www.chilkatsoft.com/download/preRelease/ChilkatDotNet4-9.5.0-x64.zip
The feature for continuing past permission/access issues had to do with issues on the remote server as opposed to the local filesystem. This new build should now also do the same for local permission errors. It will be noted in the release notes for Chilkat version 9.5.0.47 when released (soon).
If you have trouble, please post the LastErrorText using this new build.

clsql: connecting to an Oracle database

I am doing some practice with clsql. I want to connect to my Oracle server, hence my connection function is:
(connect '("192.168.2.3" "xe" "username" "password") :database-type :oracle)
When I hit return, the following error message shows up.
Couldn't load foreign libraries "libclntsh", "oci". (searched *FOREIGN-LIBRARY-SEARCH-PATHS*)
[Condition of type SIMPLE-ERROR]
I have already installed oracle-instantclient11.2-basic-11.2.0.1.0-1.i386.rpm
and defined export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client/lib
So, what else should I do to connect to the server?
I was playing with Oracle lately and found out that all you need is to put the path to libclntsh into /etc/ld.so.conf.d/oracle.conf.
My setup was as follows (RedHat/CentOS, as root): downloaded from Oracle
oracle-instantclient12.1-basic-12.1.0.2.0-1.x86_64.rpm
oracle-instantclient12.1-devel-12.1.0.2.0-1.x86_64.rpm
install via rpm -ivh oracle*.rpm
Create file /etc/ld.so.conf.d/oracle.conf:
/usr/lib/oracle/12.1/client64/lib
then execute ldconfig
Now as clsql-oracle is not in quicklisp, I downloaded and extracted clsql-6.6.2, then
(require "asdf")
(push #P"/opt/jeff/clsql-6.6.2/" asdf:*central-registry*)
(asdf:load-system :clsql-oracle)
(defparameter *some-db* (connect '("127.0.0.1:1521/db1" "SOME_USER_RO" "*******") :database-type :oracle))
and voila, it works
One thing that trips me up with dynamic linking to the Oracle libs (in C/C++, that is) is the fact that the libclntsh.so shared object comes with the version after the .so name. So you may need to create a soft link in the same directory, ensuring that the soft link name is just libclntsh.so.
