How do I use light-4j to work with MySQL? The demo on GitHub seems incomplete. My reference project is https://github.com/networknt/light-example-4j
light-4j doesn't provide anything specific for working with MySQL; you can use the standard Java libraries directly (java.sql, javax.sql). If you need it, you can initialize your connection pool settings with a StartupHookProvider by creating your own MysqlStartupHookProvider
(see the MongoDB example: https://github.com/networknt/light-example-4j/blob/release/mongodb/src/main/java/com/networknt/database/db/MongoStartupHookProvider.java)
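As a rough sketch of that idea, assuming HikariCP for the pool and the MySQL JDBC driver on the classpath (the class itself and its settings are illustrative, not something light-4j ships):

import com.networknt.server.StartupHookProvider;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import javax.sql.DataSource;

// Hypothetical hook: register it the same way the MongoDB example registers
// its MongoStartupHookProvider (through the service configuration).
public class MysqlStartupHookProvider implements StartupHookProvider {

    // Shared pool that other components can use after startup.
    public static DataSource dataSource;

    @Override
    public void onStartup() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // adjust to your database
        config.setUsername("user");
        config.setPassword("password");
        config.setMaximumPoolSize(10);
        dataSource = new HikariDataSource(config);
    }
}

A ShutdownHookProvider can close the pool again when the server stops.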
I have a huge SQLite file containing my database. I need to know whether it is possible, and how, to connect to this database as an embedded one with JPA.
I'm developing an app that packs this database inside its own JAR, so that when I use it on another system I don't have to copy my database back and forth.
The technologies I'd like to use are Angular and Spring, since those are the ones I know best. If there are technologies that better suit this purpose, I'd welcome suggestions.
Thanks :)
I hope I understood your question correctly, so I made a small project you can have a look at: spring-jpa-sqlite-sample. It may guide you a bit, though I don't claim correctness or completeness.
The path to the SQLite file can easily be changed by setting the correct URL in the persistence.properties file:
driverClassName=org.sqlite.JDBC
# you may use relative paths
url=jdbc:sqlite:src/main/resources/chinook.db
hibernate.dialect=dev.mutiny.semo.config.SQLiteDataTypesConfig
hibernate.hbm2ddl.auto=none
hibernate.show_sql=true
You can also use environment variables from your system, which Spring can read, to reference the correct directory for the file. See: Read system environment var (SO)
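To illustrate that, here is a minimal sketch of a DataSource bean that falls back from an environment variable to the bundled file (the variable name SQLITE_DB_PATH and the config class are my own invention, not part of the sample project):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class SqliteDataSourceConfig {

    @Bean
    public DataSource dataSource() {
        // Hypothetical env var pointing at the .db file; falls back to the path
        // used in persistence.properties above.
        String dbPath = System.getenv().getOrDefault("SQLITE_DB_PATH",
                "src/main/resources/chinook.db");

        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.sqlite.JDBC");
        ds.setUrl("jdbc:sqlite:" + dbPath);
        return ds;
    }
}

DriverManagerDataSource does not pool connections, which is fine for a small embedded file like this.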
Last but not least: beware of using huge SQLite files. Consider transferring the data first into a 'real' client/server RDBMS (Oracle, MariaDB, MSSQL, depending on your scenario/taste).
Have a closer look at the documentation: When to use SQLite (and when not to!)
I am using pyramid_beaker as my session factory. I want to save sessions in a MySQL database, so I want to know how to configure that.
I have gone through this:
http://docs.pylonsproject.org/projects/pyramid_beaker/en/latest/
but it does not solve my problem.
It does not give any clue about where to put the MySQL username, password, etc.
pyramid_beaker is a thin wrapper around Beaker which pulls the settings from your INI file into Beaker. Beaker itself [1] contains docs on how to use its various backends. For example, if you're using the beaker.ext.database backend, then you should set session.url = mysql://user:password@host:port/dbname just like any other SQLAlchemy connection string.
[1] https://beaker.readthedocs.io/en/latest/configuration.html#options-for-sessions-and-caching
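For instance, the session block in your Pyramid INI file might look roughly like this (a sketch, not a complete option list; the key, secret and lock_dir values are placeholders, and the mysql:// URL needs a MySQL driver such as MySQLdb installed):

session.type = ext:database
session.url = mysql://user:password@host:3306/dbname
session.key = mysession
session.secret = change-this-to-a-long-random-string
session.lock_dir = %(here)s/data/sessions/lock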
I am new to Mule and I have been struggling with a simple issue for a while now. I am trying to connect to flat files (.MDB, .DBF) located on a remote desktop through my Mule application, using Mule's generic database connector. I have tried different things here:
I am using the StelsDBF and StelsMDB drivers for JDBC connectivity. I tried connecting directly using the JDBC URL - jdbc:jstels:mdb:host/path
I have also tried to access the files through FTP, by running a FileZilla server on the remote desktop and using this JDBC URL in my app - jdbc:jstels:dbf:ftp://user:password@host:21/path
None of these seems to work; I always get connection exceptions. If anyone has tried this before, what is the best way to go about connecting to a remote flat file from Mule? Your response will be greatly appreciated!
If you want to load the contents of the file inside a Mule flow, you should use the File or FTP connector; I don't know for sure about your JDBC option.
With the File connector you can access local files (files on the server where Mule is running), so you could try mounting the remote folders as a share.
Or run an FTP server, as you already tried; that should work.
There is probably an error in your syntax / connection.
Please paste the complete XML of your Mule flow so we can see what you are trying to do.
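In the meantime, independent of Mule, it's worth ruling out the driver and URL with a few lines of plain JDBC first. A minimal sketch (the driver class name is a placeholder - take the real one from the StelsMDB/StelsDBF documentation):

import java.sql.Connection;
import java.sql.DriverManager;

public class JdbcSanityCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder: use the driver class name from the StelsMDB/StelsDBF docs.
        Class.forName("com.example.JstelsDriver");
        // Same URL you configured in Mule's generic database connector.
        try (Connection con = DriverManager.getConnection("jdbc:jstels:mdb:host/path")) {
            System.out.println("Connected: " + !con.isClosed());
        }
    }
}

If this already fails outside Mule, the problem is the driver, the URL, or network access to the file, not the Mule configuration.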
Your use case is still not really clear to me: are you really planning to use HTTP to trigger the DB every time? Anyway, did you try putting the file on a local path and using that path in your database URL? Here is someone who says he had it working; he created a separate bean.
http://forums.mulesoft.com/questions/6422/setting_property_dynamically_on_jdbcdatasource.html
I think a local path may well be possible, and it's better to test that first.
Also take note of how to refer to a file path; look at the examples for the File connector: https://docs.mulesoft.com/mule-user-guide/v/3.7/file-transport-reference#namespace-and-syntax
If you manage to get it working and you can use the path directly in the JDBC URL, you should have a look at the poll scope.
https://docs.mulesoft.com/mule-user-guide/v/3.7/poll-reference
You can use your DB connector as an inbound endpoint when wrapped in a poll scope.
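Roughly, a polled select looks like this in Mule 3.7 XML (a sketch only; namespace declarations are omitted and the config name, frequency and query are placeholders for your own values):

<flow name="pollDatabaseFlow">
    <poll doc:name="Poll">
        <fixed-frequency-scheduler frequency="10" timeUnit="SECONDS"/>
        <db:select config-ref="Generic_Database_Configuration" doc:name="Database">
            <db:dynamic-query>SELECT * FROM mytable</db:dynamic-query>
        </db:select>
    </poll>
    <logger message="#[payload]" level="INFO" doc:name="Log result"/>
</flow>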
I experienced the same issue when connecting to a Microsoft Access database (*.mdb, *.accdb) using the Mule Database Connector. After further investigation, it was solved by installing the Microsoft Access Database Engine.
Another issue: I couldn't pass a parameter to construct a query the same way I do for other databases, e.g.: SELECT * FROM emplcopy WHERE id = #[payload.id]
To solve this issue:
I changed the query type from Parameterized to Dynamic.
I generated the query inside a Set Payload transformer (building the query as a string, e.g.: SELECT * FROM emplcopy WHERE id = '1').
Finally, I put it into the Dynamic query area: #[payload]
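Put together, the relevant part of the flow looks roughly like this (a sketch; the config name and the id expression are placeholders for whatever your flow provides, and namespace declarations are omitted):

<!-- Build the SQL as a plain string; quote the value yourself if the column is text. -->
<set-payload value="#['SELECT * FROM emplcopy WHERE id = ' + payload.id]" doc:name="Build query"/>
<db:select config-ref="Generic_Database_Configuration" doc:name="Database">
    <db:dynamic-query>#[payload]</db:dynamic-query>
</db:select>

The trade-off versus a parameterized query is that you are concatenating values into SQL yourself, so validate or escape them before building the string.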
I've been using MongoDB for a little tool that I'm building, but I have two problems that I don't know if I can "solve". Those problems are mainly related to having to start a MongoDB server (mongod).
The first is that I have to run two commands every time that I want to use it (mongod and my app's command) and the other is testing. For now, I'm using different collections for "production" and "test", but it would be better to have just an embedded / self-contained instance that I can start and drop whenever I want.
Is that possible? Or should I just use something else, like SQLite for that?
Thanks!
Another similar project is https://github.com/Softmotions/ejdb.
The query syntax is similar to MongoDB's.
We use this at work - https://github.com/flapdoodle-oss/embedmongo.flapdoodle.de - to fire up an embedded Mongo for integration tests. It has worked really well.
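For anyone who wants to see what that looks like, here is a rough sketch using the older flapdoodle API (class names differ between versions, so treat this as an outline rather than copy-paste code):

import de.flapdoodle.embed.mongo.MongodExecutable;
import de.flapdoodle.embed.mongo.MongodProcess;
import de.flapdoodle.embed.mongo.MongodStarter;
import de.flapdoodle.embed.mongo.config.IMongodConfig;
import de.flapdoodle.embed.mongo.config.MongodConfigBuilder;
import de.flapdoodle.embed.mongo.config.Net;
import de.flapdoodle.embed.mongo.distribution.Version;
import de.flapdoodle.embed.process.runtime.Network;

public class EmbeddedMongoExample {
    public static void main(String[] args) throws Exception {
        MongodStarter starter = MongodStarter.getDefaultInstance();
        IMongodConfig config = new MongodConfigBuilder()
                .version(Version.Main.PRODUCTION)
                .net(new Net("localhost", 27017, Network.localhostIsIPv6()))
                .build();

        MongodExecutable executable = starter.prepare(config);
        try {
            MongodProcess mongod = executable.start();
            // Point your MongoDB client at localhost:27017 and run your tests here.
        } finally {
            executable.stop(); // also shuts down the spawned mongod process
        }
    }
}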
I haven't tried it, but I just found this Ruby implementation of an embedded MongoDB: https://github.com/gdb/embedded-mongo
I'm pretty sure the answer is "no" but I thought I'd check.
Background:
I have some legacy data in Access that I need to get into MySQL, which will be the DB server for a Ruby application that uses this legacy data.
Data has to be processed and transformed. Access and MySQL schemas are totally different. I want to write a rake task in Ruby to do the migration.
I'm planning to use the techniques outlined in this blog post: Using Ruby and ADO to Work with Access Databases. But I could use a different technique if it solves the problem.
I'm comfortable working on Unix-like computers, such as Macs. I avoid working in Windows because it fills me with deep existential horror.
Is there a practical way that I can write and run my rake task on my Mac and have it reach across the network to the grunting Mordor that is my Windows box and delicately pluck the data out like a team of commandos rescuing a group of hostages? Or do I have to just write this and run it on Windows?
Why don't you export it from MS-Access into Excel or CSV files and then import it into a separate MySQL database? Then you can rake the new one to your heart's content.
Mac ODBC drivers that open Access databases are available for about $30.00
http://www.actualtechnologies.com/product_access.php is one. I just run Access inside VMware on my Mac and export to CSV/Excel as CodeSlave mentioned.
ODBC might be handy in case you want to use the Access database to do a more direct transfer.
Hope that helps.
I had a similar issue where I wanted to use Ruby with SQL Server. The best solution I found was using JRuby with the Java JDBC drivers. I'm guessing this will work with Access as well, but I don't know anything about Access.