Does RethinkDB have an integrated command line client like psql?
I have seen the web admin console, but a CLI would make it much easier to interact from a bash script, or manually over SSH when troubleshooting...
I have seen:
https://github.com/stiang/recli
There is also an issue related to this question:
https://github.com/rethinkdb/rethinkdb/issues/189
but the solution there is unclear; rethinkdb repl doesn't exist.
At the moment, RethinkDB does not have an official "shell" or "query CLI". As you've found, we do have the Data Explorer in the WebUI, which will allow you to do anything you can do with a driver.
Since I have RethinkDB running on most of my machines, what I usually do is add two lines to my ipython configuration to load the rethinkdb driver on startup and make a connection to my local database.
This is just a couple of steps:
Run ipython profile create, which creates ~/.ipython/profile_default/ipython_config.py
In this config file, edit c.InteractiveShellApp.exec_lines (line 35) like so:
c.InteractiveShellApp.exec_lines = [
"import rethinkdb as r",
"conn = r.connect()"
]
Now when you start ipython you'll see that conn is already an established connection to RethinkDB:
$ ipython3
Python 3.5.2 (default, Jul 28 2016, 21:28:00)
Type "copyright", "credits" or "license" for more information.
IPython 5.0.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: conn
Out[1]: <rethinkdb.net.DefaultConnection at 0x109d8c4e0>
In [2]: r.db_list().run(conn)
Out[2]:
['asyncio',
'example',
...]
This makes turning ipython into your "ReQL-cli" a bit more convenient.
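If you also want something you can call from a bash script rather than interactively, a tiny wrapper will do. This is just a minimal sketch of my own (nothing official), and the eval() means it is only suitable as a personal tool:

#!/usr/bin/env python3
# reql-cli.py - evaluate one ReQL expression against a local RethinkDB
import sys
import rethinkdb as r

conn = r.connect()  # localhost:28015 by default
# Evaluate the expression passed on the command line with r in scope.
# eval() runs arbitrary code, so never feed this untrusted input.
query = eval(sys.argv[1], {"r": r})
print(query.run(conn))

Used like: python3 reql-cli.py "r.db_list()"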
I'm trying to generate documentation for my HL7 FHIR profiles using the IG Publisher (publisher.jar). I'm running it on the command line on macOS. I've uploaded the IG resource to Simplifier and it validates with no errors.
The problem is that I'm getting a java.lang.NullPointerException. Full output below:
java -jar publisher.jar -ig fsh-generated/resources/ImplementationGuide-nfz.pozplus.json -tx n/a
FHIR IG Publisher Version 1.1.120 (Git# 210e48f945ad). Built 2022-05-13T15:20:39.709Z (8 days old)
Detected Java version: 11.0.10 from /Library/Java/JavaVirtualMachines/adoptopenjdk-11.jdk/Contents/Home on Mac OS X/x86_64 (64bit). 2048MB available
dir = /Users/marcingrudzien/Library/CloudStorage/OneDrive-Osobisty/Praca/iEHReu/NFZ/POZ-Plus/FHIR/sushi-test/NfzTest, path = /Users/marcingrudzien/.gem/ruby/3.1.2/bin:/Users/marcingrudzien/.rubies/ruby-3.1.2/lib/ruby/gems/3.1.0/bin:/Users/marcingrudzien/.rubies/ruby-3.1.2/bin:/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/share/dotnet:~/.dotnet/tools:/Library/Frameworks/Mono.framework/Versions/Current/Commands:/Applications/Postgres.app/Contents/Versions/latest/bin
Parameters: -ig fsh-generated/resources/ImplementationGuide-nfz.pozplus.json -tx n/a
Start Clock # sobota, 21 maja 2022 20:56:04 czas środkowoeuropejski letni (2022-05-21T20:56:04+02:00)
API keys loaded from /Users/marcingrudzien/fhir-api-keys.ini (00:00.0027)
Package Cache: /Users/marcingrudzien/.fhir/packages (00:00.0030)
Load Configuration from /Users/marcingrudzien/Library/CloudStorage/OneDrive-Osobisty/Praca/iEHReu/NFZ/POZ-Plus/FHIR/sushi-test/NfzTest/fsh-generated/resources/ImplementationGuide-nfz.pozplus.json (00:00.0053)
Root directory: /Users/marcingrudzien/Library/CloudStorage/OneDrive-Osobisty/Praca/iEHReu/NFZ/POZ-Plus/FHIR/sushi-test/NfzTest/fsh-generated/resources (00:00.0084)
Publishing Content Failed: null (00:00.0086)
(00:00.0087)
Use -? to get command line help (00:00.0087)
(00:00.0087)
Stack Dump (for debugging): (00:00.0088)
java.lang.NullPointerException
at org.hl7.fhir.igtools.publisher.Publisher.initializeFromJson(Publisher.java:2947)
at org.hl7.fhir.igtools.publisher.Publisher.initialize(Publisher.java:2168)
at org.hl7.fhir.igtools.publisher.Publisher.execute(Publisher.java:854)
at org.hl7.fhir.igtools.publisher.Publisher.main(Publisher.java:10144)
I hope someone here can help me. I can provide additional information if needed.
From the stack dump, you are missing a "path" property in your JSON config file, but I highly recommend that you switch to the new setup: an ini file that points to a template and an IG resource. Use something like the sample-ig as a starter (https://github.com/FHIR/sample-ig) or see the how-to (https://build.fhir.org/ig/FHIR/ig-guidance/index.html)
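For reference, the ini file in the new setup looks roughly like this (a sketch modeled on sample-ig; the paths are illustrative and must match your project):

[IG]
ig = fsh-generated/resources/ImplementationGuide-nfz.pozplus.json
template = fhir.base.template#current

You then run the publisher with java -jar publisher.jar -ig ig.ini from the project root.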
I am trying to build a Python 3.5 environment that supports an old hddm library. Standard approaches fail due to my/Anaconda's apparent inability to ignore (or downgrade) the CUDA 10.1 library in favor of an older one that works with hddm.
There is a yml file available that describes a successful environment. But the advertised command
conda env create --file hddm_py35.yml
fails with an error listing all of the packages as "not found." Here are the errors:
(base) PS C:\Users\Peter\anaconda3_Sep2020> conda env create --file .\hddm_py35.yml
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
odo==0.5.0=py35_1
cffi==1.7.0=py35_0
dill==0.2.5=py35_0
singledispatch==3.4.0.3=py35_0
nb_conda_kernels==2.0.0=py35_0
requests==2.14.2=py35_0
scikit-learn==0.17.1=np111py35_1
wheel==0.29.0=py35_0
jedi==0.9.0=py35_1
widgetsnbextension==1.2.6=py35_0
bitarray==0.8.1=py35_1
theano==1.0.2=py35_0
pytz==2016.6.1=py35_0
pylint==1.5.4=py35_1
ruamel_yaml==0.11.14=py35_0
partd==0.3.6=py35_0
llvmlite==0.13.0=py35_0
multipledispatch==0.4.8=py35_0
pyparsing==2.1.4=py35_0
console_shortcut==0.1.1=py35_1
ipython_genutils==0.1.0=py35_0
patsy==0.4.1=py35_0
pytest==2.9.2=py35_0
heapdict==1.0.0=py35_1
ipywidgets==5.2.2=py35_0
bokeh==0.12.2=py35_0
hdf5==1.8.15.1=2
networkx==1.11=py35_0
backports==1.0=py35_0
pyasn1==0.1.9=py35_0
pyqt==5.6.0=py35h6538335_6
zlib==1.2.11=hbb18732_2
et_xmlfile==1.0.1=py35_0
traitlets==4.3.0=py35_0
colorama==0.3.7=py35_0
argcomplete==1.0.0=py35_1
pywin32==220=py35_1
astropy==1.2.1=np111py35_0
nose==1.3.7=py35_1
freetype==2.8=h0224ed4_1
pkginfo==1.3.2=py35_0
cloudpickle==0.2.1=py35_0
sqlalchemy==1.0.13=py35_0
lazy-object-proxy==1.2.1=py35_0
markupsafe==0.23=py35_2
prompt_toolkit==1.0.3=py35_0
pickleshare==0.7.4=py35_0
itsdangerous==0.24=py35_0
babel==2.3.4=py35_0
click==6.6=py35_0
six==1.10.0=py35_0
libdynd==0.7.2=0
jdcal==1.2=py35_1
pymc==2.3.6=np111py35_2
pathlib2==2.1.0=py35_0
astroid==1.4.7=py35_0
numba==0.28.1=np111py35_0
qtconsole==4.2.1=py35_2
wrapt==1.10.6=py35_0
idna==2.1=py35_0
pytables==3.2.2=np111py35_4
_nb_ext_conf==0.3.0=py35_0
dynd-python==0.7.2=py35_0
numexpr==2.6.1=np111py35_0
werkzeug==0.11.11=py35_0
rope==0.9.4=py35_1
jupyter_client==4.4.0=py35_0
pyzmq==15.4.0=py35_0
python-dateutil==2.5.3=py35_0
beautifulsoup4==4.5.1=py35_0
blaze==0.10.1=py35_0
nbformat==4.1.0=py35_0
nbpresent==3.0.2=py35_0
sip==4.18=py35_0
chest==0.2.3=py35_0
glob2==0.5=py35_0
locket==0.2.0=py35_1
mistune==0.7.3=py35_0
alabaster==0.7.9=py35_0
setuptools==27.2.0=py35_1
win_unicode_console==0.5=py35_0
filelock==2.0.6=py35_0
_license==1.1=py35_1
ipykernel==4.5.0=py35_0
qt==5.6.2=vc14h6f76a7e_12
pep8==1.7.0=py35_0
xlwings==0.10.0=py35_0
spyder==3.0.0=py35_0
xlrd==1.0.0=py35_0
scipy==0.18.1=np111py35_0
dask==0.11.0=py35_0
nbconvert==4.2.0=py35_0
pip==8.1.2=py35_0
mkl==11.3.3=1
nb_anacondacloud==1.2.0=py35_0
cython==0.24.1=py35_0
flask-cors==2.1.2=py35_0
ipython==5.1.0=py35_0
cycler==0.10.0=py35_0
jpeg==9b=he27b436_2
menuinst==1.4.1=py35_0
anaconda==4.2.0=np111py35_0
configobj==5.0.6=py35_0
boto==2.42.0=py35_0
unicodecsv==0.14.1=py35_0
scikit-image==0.12.3=np111py35_1
contextlib2==0.5.3=py35_0
conda-build==3.0.19=py35h15d37ab_0
jinja2==2.8=py35_1
conda-verify==2.0.0=py35_0
get_terminal_size==1.0.0=py35_0
qtpy==1.1.2=py35_0
anaconda-client==1.5.1=py35_0
decorator==4.0.10=py35_0
ply==3.9=py35_0
openpyxl==2.3.2=py35_0
sockjs-tornado==1.0.3=py35_0
pyyaml==3.12=py35_0
snowballstemmer==1.2.1=py35_0
toolz==0.8.0=py35_0
py==1.4.31=py35_0
xlwt==1.1.2=py35_0
clyent==1.2.2=py35_0
bottleneck==1.1.0=np111py35_0
jupyter==1.0.0=py35_3
mkl-service==1.1.2=py35_2
simplegeneric==0.8.1=py35_1
wcwidth==0.1.7=py35_0
h5py==2.6.0=np111py35_2
gevent==1.1.2=py35_0
pycrypto==2.6.1=py35_4
datashape==0.5.2=py35_0
psutil==4.3.1=py35_0
nltk==3.2.1=py35_0
jsonschema==2.5.1=py35_0
notebook==4.2.3=py35_0
pycparser==2.14=py35_1
xlsxwriter==0.9.3=py35_0
jupyter_core==4.2.0=py35_0
qtawesome==0.3.3=py35_0
fastcache==1.0.2=py35_1
jupyter_console==5.0.0=py35_0
tornado==4.4.1=py35_0
path.py==8.2.1=py35_0
pyflakes==1.3.0=py35_0
sympy==1.0=py35_0
pandas==0.20.1=np111py35_0
pygments==2.1.3=py35_0
anaconda-clean==1.0.0=py35_0
mpmath==0.19=py35_1
comtypes==1.1.2=py35_0
cryptography==1.5=py35_0
chardet==3.0.4=py35_0
entrypoints==0.2.2=py35_0
sphinx==1.4.6=py35_0
greenlet==0.4.10=py35_0
anaconda-navigator==1.3.1=py35_0
flask==0.11.1=py35_0
pyopenssl==16.2.0=py35_0
lxml==3.6.4=py35_0
icu==58.2=h3fcc66b_1
docutils==0.12=py35_2
statsmodels==0.6.1=np111py35_1
nb_conda==2.0.0=py35_0
imagesize==0.7.1=py35_0
(base) PS C:\Users\Peter\anaconda3_Sep2020>
The failure occurred within seconds. I get the feeling that conda didn't even try to look for these packages!?!?
Am I supposed to download these packages, put them somewhere, and then tell conda to find them on my hard drive?
Is there a flag that tells conda to do its usually find-and-load for all "missing" packages -- but only in the environment I'm describing? In my base environment (3.8) I don't wish to downgrade.
Should I make a new 3.5 environment and then work through the list one by one, uninstalling/removing/downgrading each package by hand?
Meta question: This must be a FAQ, and yet I'm not able to google for the answer. That usually means googling for "conda install environment from yaml file" doesn't contain the appropriate vocabulary for, well, trying to induce conda to install an environment from a yaml file. What question should I have asked?
1) Am I supposed to download these packages, put them somewhere, and then
tell conda to find them on my hard drive?
Not necessary. But searching for the versions on anaconda.org helps identify channels for one-by-one manual download.
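For example, to check whether one of the pinned builds is still available anywhere (the free channel here is a guess based on where Anaconda parks its old builds):

conda search 'scikit-learn=0.17.1=np111py35_1' -c free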
2) Is there a flag that tells conda to do its usually find-and-load for all
"missing" packages -- but only in the environment I'm describing? In my base
environment (3.8) I don't wish to downgrade.
Conda does try to find and download everything listed in a yaml file; the ResolvePackageNotFound error means the exact version/build pins in this file are not available in your currently configured channels. Most of these old py35 builds live in Anaconda's legacy "free" channel, which current conda versions no longer search by default.
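That said, one thing worth trying before abandoning the yaml: conda 4.7 and later stopped searching the "free" channel, and restoring it sometimes lets the solve succeed:

conda config --set restore_free_channel true
conda env create --file hddm_py35.yml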
3) Should make a new 3.5 environment and then work through the list one-by-
one and uninstall/remove/downgrade each package by hand?
Yes.
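A sketch of that manual route (package names and versions here are illustrative, not a tested recipe; hddm's core dependencies are pymc and kabuki):

conda create -n hddm_py35 python=3.5
conda activate hddm_py35
conda install numpy scipy pandas matplotlib
pip install pymc kabuki hddm

Expect to iterate: each failure tells you which pin to relax or which channel to add.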
4) Meta question: This must be a FAQ, and yet I'm not able to google for the
answer. That usually means googling for "conda install environment from yaml
file" doesn't contain the appropriate vocabulary for, well, trying to induce
conda to install an environment from a yaml file. What question should I have
asked?
A yaml file exported from an environment is essentially a list of exact package versions and builds. It can only recreate an environment when all of those pinned builds are still available for your platform and channels, which is often not the case for an old environment exported on another OS, so in practice its value here is largely annotative.
As for making an environment for hddm in 2020: don't try. CUDA support will work against you. There is a hddm host at https://colab.research.google.com/ that is properly configured (without CUDA disruption), so you can use it to kick the tires, etc. Getting hddm to work in any other context probably requires dedicated hardware, so that the CUDA driver can be manipulated for this application alone without breaking other applications in the process.
I am writing (hopefully) a simple AWS Lambda function that will run an RDS Oracle SQL SELECT and email the results. So far I have been using the Lambda Management Console, but all the examples I've run across talk about making a Lambda Deployment Package. So my first question is: can I do this from the Lambda Management Console?
My next question is what to import for the Oracle DB API? In all the examples I have seen, they download and build a package with pip, but that would then seem to imply using a Deployment Package (see above). Trying to import any of the modules listed in the examples simply gives "No module named ...".
After writing the above I dug into the boto3 API reference and came up with:
import boto3
client = boto3.client('rds-data')
But it gives the error: Unknown service: 'rds-data'.
So I'm still lost.
As you can probably tell, I'm new to the Lambda environment. Any suggestions or examples would be greatly appreciated. Thanks.
This is an update of the solution using the 18c Oracle client libraries. Without the main solution it would have taken me a lot longer to get my code working. Hopefully this helps anyone who follows.
(An aside: I tried getting it working with instantclient_19_3 but went round in circles for a day, then tried instantclient_18_5 and it worked.)
Files downloaded and used
instantclient-basic-linux.x64-18.5.0.0.0dbru.zip (all files)
cx_Oracle 7.2.2 (https://cx-oracle.readthedocs.io/en/latest/release_notes.html#releasenotes)
libaio.so.1.0.1 (as described in the main answer, renamed to libaio.so.1)
This then gave the following files in the zip (lambda_function.py is my Python source code):
[zip contents listing]
Apparently, AWS Lambda is using an older version of boto3, which does not have rds-data yet.
So I'm afraid you will have to create a deployment package containing a more recent version of boto3.
One way to do this, would be to:
Create your lambda handler file (in this case named index.py).
import boto3

def my_handler(event, context):
    client = boto3.client('rds-data')
    print(client)
    # do stuff
    return "hello world"
Add a requirements.txt file in the same folder, which will contain something like:
awscli >= 1.16.118
boto3 >= 1.9.108
Now run this (depending on the setup on your computer, you can use pip instead of pip3) in the directory/folder of your index and requirements files:
pip3 install -r requirements.txt -t .
zip -r somezipname .
Next, upload this zip and change your handler 'entry point' to index.my_handler. The code should now run without errors.
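If you prefer the command line for the upload step too, something like this should work (the function name is a placeholder, and this assumes the AWS CLI is configured):

aws lambda update-function-code --function-name my-function --zip-file fileb://somezipname.zip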
The older version of boto3 bundled with Lambda does not support rds-data, but you can deploy a newer version in a zip package.
I recommend using cx_Oracle instead: install cx_Oracle with pip, include it in the zip package, and upload that. Check this:
How can I access Oracle from Python?
After much groaning and gnashing of teeth I have come up with a successful solution.
rds_data (as confirmed by AWS Support) only supports Aurora databases. Wish the AWS documents mentioned this. 8{(>
Thanks to the answers above as well as Jason Landrey for hints as to the solution.
In order to access RDS/Oracle, you need to use cx_Oracle. But wait, there's more.
cx_Oracle is not in the standard Lambda environment, so you need to bring your own. My development environment is Windows, but the Lambda environment is Linux, so you need to download the Linux wheel and install it into your packaging directory. I got mine from https://pypi.org/project/cx-Oracle/#files. Install locally with:
pip install cx_Oracle-7.1.2-cp37-cp37m-manylinux1_x86_64.whl -t .
You will see several files appear in the current directory. Then you need to find a Linux system, download /lib64/libaio.so.1.0.1, and name it libaio.so.1 in your packaging directory.
And then you need to download both Oracle instant client basic and SDK packages from http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html.
Create a zip file with all these items (including your own Python source). In doing so, rename the Oracle instant client files libclntsh.so.11.1 to libclntsh.so and libocci.so.11.1 to libocci.so.
Upload the zip to an S3 bucket, as direct deployment is limited to 66 MB and this zip is a bit larger.
Create a Lambda with the appropriate IAM permissions and VPC access, install the package and it should be good to go.
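For completeness, the handler side ends up looking something like this (a minimal sketch; the endpoint, credentials, and query are placeholders, and real code should pull credentials from somewhere like Secrets Manager rather than hardcoding them):

import cx_Oracle

def lambda_handler(event, context):
    # Placeholder RDS endpoint and credentials
    dsn = cx_Oracle.makedsn("mydb.example.us-east-1.rds.amazonaws.com", 1521,
                            service_name="ORCL")
    conn = cx_Oracle.connect(user="admin", password="secret", dsn=dsn)
    try:
        cur = conn.cursor()
        cur.execute("SELECT sysdate FROM dual")  # stand-in for the real report query
        rows = cur.fetchall()
    finally:
        conn.close()
    # Stringify so the return value is JSON-serializable
    return [str(row[0]) for row in rows]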
I found that if you don't include all the instant client files you start getting Oracle errors about missing timezone and NLS information.
List of zip contents (for me, YMMV):
7996693 08/24/2013 12:30 libnnz11.so
0 03/11/2019 16:10 cx_Oracle-7.1.1.data/
0 03/11/2019 16:10 cx_Oracle-7.1.1.data/data/
0 03/11/2019 16:10 cx_Oracle-7.1.1.data/data/cx_Oracle-doc/
0 03/11/2019 16:10 cx_Oracle-7.1.1.dist-info/
1325 03/13/2019 12:35 Email.py
1805 02/19/2019 21:11 cx_Oracle-7.1.1.data/data/cx_Oracle-doc/LICENSE.txt
163 02/19/2019 21:11 cx_Oracle-7.1.1.data/data/cx_Oracle-doc/README.txt
851 02/19/2019 21:11 cx_Oracle-7.1.1.dist-info/METADATA
628 02/19/2019 21:12 cx_Oracle-7.1.1.dist-info/RECORD
109 02/19/2019 21:12 cx_Oracle-7.1.1.dist-info/WHEEL
10 02/19/2019 21:11 cx_Oracle-7.1.1.dist-info/top_level.txt
2270301 02/19/2019 21:11 cx_Oracle.cpython-37m-x86_64-linux-gnu.so
2140 03/13/2019 14:21 getSecrets.py
5560 03/12/2019 08:48 libaio.so.1
53865194 08/24/2013 12:30 libclntsh.so
118738042 08/24/2013 12:30 libociei.so
7633 03/13/2019 16:39 scheduleReports.py
I am using an earlier build for Windows 64-bit, downloaded from here:
dl.dropboxusercontent.com/u/63393258/osm2pgsql_testRelease.zip
from this website:
awcull.com/2015/09/30/postgis-osm2pgsql-windows.html
but it is crashing when I import a large PBF of the whole of Europe, downloaded from download.geofabrik.de/
I'm tired of this shit... I tried slim and non-slim mode, I tried modifying the cache size, and nothing has worked so far. Our server has 32 GB of RAM.
Where can I download the latest osm2pgsql build for Windows 64-bit? Alternatively, which compiler do you suggest for making my own build on Windows Server 2012 64-bit? Thanks.
The command I ran osm2pgsql with the last time it crashed was:
PS C:\OSM\rendering> osm2pgsql -U postgres -m -d osm -p osm -E 3857 -s -C 25000 -S C:\OSM\osm2pgsql\default.style C:\OSM\Data\europe-latest.osm.pbf
It crashed with the standard Windows dialog saying "the application stopped blablabla" with these details:
Problem signature:
Problem Event Name: APPCRASH
Application Name: osm2pgsql.exe
Application Version: 0.0.0.0
Application Timestamp: 53ea21fd
Fault Module Name: ntdll.dll
Fault Module Version: 6.3.9600.18438
Fault Module Timestamp: 57ae642e
Exception Code: c00000fd
Exception Offset: 0000000000030d02
OS Version: 6.3.9600.2.0.0.272.7
Locale ID: 1033
Additional Information 1: 33ad
Additional Information 2: 33ad00700702b0ab4dc632df7667ec82
Additional Information 3: 2ebb
Additional Information 4: 2ebbf5b91303f76c5b7f75f6255100fa
Read our privacy statement online:
http://go.microsoft.com/fwlink/?linkid=280262
If the online privacy statement is not available, please read our privacy statement offline:
C:\Windows\system32\en-US\erofflps.txt
Now I'm trying without the "-C" option, but I bet it will crash again...
PS C:\OSM\rendering> osm2pgsql -U postgres -m -d osm -p osm -E 3857 -s -S C:\OSM\osm2pgsql\default.style C:\OSM\Data\europe-latest.osm.pbf
Necromancing.
The latest build (Continuous Integration) can always be found on AppVeyor.
You need to get the current build (or a historic build from the history, by its git-commit hash).
https://ci.appveyor.com/project/openstreetmap/osm2pgsql
=> Environment arch x64
=> Artifacts tab
=> Download osm2pgsql_Release_x64.zip
The link might break; if it does, google "appveyor osm2pgsql", which should usually be the first result.
Download from gis.stackexchange
Here is the Github link
Here is the Hot-Installer reference.
Is there any documentation available for the neo4j-import command line tool for neo4j 3.0? I've used the command line tool and the powershell script version in neo4j 2.3.3, but the same commands are not working for neo4j 3.0 when using the Windows 64-bit version. I can successfully run the import using the Linux/UNIX (tar) version of neo4j 3.0.
bin\neo4j-import.bat --into "C:\Neo4j\TBR_3\data\databases\graph.db" --nodes "C:\Developer\Neo4j_Staging\TBR\header_person_neo4j.psv,C:\Developer\Neo4j_Staging\TBR\nodes_person_neo4j.psv" --nodes "C:\Developer\Neo4j_Staging\TBR\header_address_neo4j.psv,C:\Developer\Neo4j_Staging\TBR\nodes_address_neo4j.psv" --nodes "C:\Developer\Neo4j_Staging\TBR\header_telephone_neo4j.psv,C:\Developer\Neo4j_Staging\TBR\nodes_telephone_neo4j.psv" --relationships "C:\Developer\Neo4j_Staging\TBR\header_personAddress_neo4j.psv,C:\Developer\Neo4j_Staging\TBR\rels_personAddress_neo4j.psv" --relationships "C:\Developer\Neo4j_Staging\TBR\header_personTelephone_neo4j.psv,C:\Developer\Neo4j_Staging\TBR\rels_personTelephone_neo4j.psv" --delimiter "|"
gives me:
'")"' is not recognized as an internal or external command,
operable program or batch file.
To get an idea of the structure of the headers I'm using, the header_person_neo4j.psv file contains:
masterIndividualKey:ID(Person)|matchIndividualKey:int|individualKey:int|stdTitle:string[]|stdTitleAlias:string[]|stdForename:string[]|stdForenameAlias:string[]|stdForenameNYSIIS:string[]|stdForenameSoundex:string[]|stdForenameDoubleMetaphone:string[]|stdOthername:string[]|stdOthernameAlias:string[]|stdOthernameNYSIIS:string[]|stdOthernameSoundex:string[]|stdOthernameDoubleMetaphone:string[]|stdSurname:string[]|stdSurnameNYSIIS:string[]|stdSurnameSoundex:string[]|stdSurnameDoubleMetaphone:string[]|stdGender:string[]|stdNameQuality:int[]|stdDOB:string|stdDOBMax:string|stdDOBMin:string|stdHierarchy:int[]|stdRecency:string[]|stdRecencyMax:string|stdHierarchyMin:int|stdPreferredName:int|stdGenderCombined:string|title:string|forename:string|surname:string|gender:string|nameScore:int|dateOfBirth:string|dateOfDeath:string|deathDateActual:string|:LABEL
Here is the import tool documentation, which explains with examples. The problem is with your file paths; on Windows you have to use a different path format, for example as below:
neo4j-import --into C:/neo4j/data/databases/MEDIUM_GRAPH --id-type integer --stacktrace --nodes:STORE "C:/nodes/store_header.csv,C:/nodes/dimstore.csv"
This blog is also useful for understanding different methods and tricks. By default, all data is imported as strings in Neo4j, so you do not need to specify the string data type in your CSV headers. Refer to this document for information about the CSV file header format.
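For a concrete (made-up) example to match that command, store_header.csv could contain:

storeId:ID(Store),name,openYear:int

and dimstore.csv would then hold one row per store:

1,Downtown,1998
2,Airport,2004

Only openYear needs an explicit type; name stays a string by default, and the STORE label is already applied by the --nodes:STORE part of the command, so no :LABEL column is needed here.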