SQLite-like alternative for MongoDB? [closed] - ruby

I'm looking for a document-oriented db with a Ruby API that has SQLite-like properties:
self-contained,
serverless,
zero-configuration.
Are there light alternatives to MongoDB or CouchDB?
Is RDDB a possibility?
If not, what are the best paths to walk then?

I know the question was asked 5 years ago, but just for completeness' sake, embedded MongoDB has happened since:
https://github.com/hamiltop/MongoLiteDB

It's not ready yet, but an embeddable version of CouchDB is on the long-term roadmap.
Replication is intended to enable offline applications with CouchDB. If you ended up with very specific needs, you could replicate data from CouchDB to a local data structure, store it locally, update it, and push the data back via replication, but it would take some code.

If you were using Perl, I'd recommend DBM::Deep, which stores arbitrary data structures on disk, including transactions with commit/rollback, and it's a single pure-Perl module to install with no C dependencies. It doesn't get much lighter than that.

I almost feel you could do some sort of hack to achieve this.
Have a table that uses SQLite's row ids, along with a column for the collection name and a text blob holding the document as JSON.
Have another table for indexing fields within a collection (collection name, field name, field value, document row id).
You could then write a wrapper class to handle things like updates and lookups. It would be interesting.
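A minimal sketch of that schema in SQLite SQL, just to make the idea concrete; every table and column name here is invented for illustration:

-- One row per document; INTEGER PRIMARY KEY aliases SQLite's rowid.
CREATE TABLE documents (
    id         INTEGER PRIMARY KEY,
    collection TEXT NOT NULL,   -- collection name, e.g. 'users'
    body       TEXT NOT NULL    -- the document itself as a JSON blob
);

-- Secondary table that "indexes" individual fields of each document.
CREATE TABLE document_index (
    collection  TEXT NOT NULL,
    field_name  TEXT NOT NULL,
    field_value TEXT,
    doc_id      INTEGER NOT NULL REFERENCES documents(id)
);
CREATE INDEX document_index_lookup
    ON document_index (collection, field_name, field_value);

-- Lookup: all documents in 'users' where name = 'Ada'.
SELECT d.id, d.body
  FROM documents d
  JOIN document_index i ON i.doc_id = d.id
 WHERE i.collection = 'users'
   AND i.field_name = 'name'
   AND i.field_value = 'Ada';

The wrapper class would insert into both tables on save and rewrite the index rows on update.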

Related

To query efficiently in Oracle [closed]

The Common Log Format is a standardized text file format used by web servers when generating server log files. Example:
127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326
Suppose that an Oracle database is used to store the access log of an e-commerce website with gigabytes of log data over the past six months. Discuss the options we may adopt and the steps involved such that the user can efficiently query all the IP addresses and the files accessed within any given time interval (with specified start time and end time).
If every log entry (such as the one you presented) is stored as one row in an Oracle table, then see whether you can split it so that the IP address and date values go into separate columns (which shouldn't be difficult if the format is fixed). Then index those columns to make access simpler and faster.
If that's not the case, investigate Oracle Text capabilities.
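A rough sketch of what that could look like; the table and column names are invented, and the exact set of columns depends on how much of the log line you split out:

-- One column per piece of the Common Log Format entry (names are illustrative).
CREATE TABLE access_log (
    ip_address   VARCHAR2(45),
    log_time     TIMESTAMP,
    request_file VARCHAR2(4000),
    status_code  NUMBER(3),
    bytes_sent   NUMBER
);

-- Index the column used to restrict the time interval.
CREATE INDEX access_log_time_ix ON access_log (log_time);

-- All IP addresses and files accessed within a given interval.
SELECT ip_address, request_file
  FROM access_log
 WHERE log_time BETWEEN :start_time AND :end_time;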

Creating event-driven SQL scripts [closed]

I am creating a database that stores GPS data. As soon as the database is updated with a data point, I want the server to check whether that point is within a certain area and then send a message or update another database (I haven't decided what action it should take yet). Is this kind of event-driven operation possible in PL/SQL? I am only familiar with passive querying and running scheduled scripts.
Yes, there is such a feature: database triggers. On insert or update of the data (and there are many more event types), you can check whether some conditions are met and call a PL/SQL procedure to handle the event.
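A minimal sketch of such a trigger, assuming a hypothetical GPS_POINTS table and a simple bounding-box check in place of real spatial logic:

CREATE OR REPLACE TRIGGER gps_point_area_check
AFTER INSERT ON gps_points
FOR EACH ROW
BEGIN
    -- Fires once for every new GPS point.
    IF :NEW.latitude BETWEEN 51.0 AND 52.0
       AND :NEW.longitude BETWEEN -1.0 AND 0.0 THEN
        -- Take whatever action is needed, e.g. record the event in another table.
        INSERT INTO geofence_alerts (point_id, created_at)
        VALUES (:NEW.id, SYSTIMESTAMP);
    END IF;
END;
/

The trigger body could instead call a stored procedure, queue a message, or update a remote database over a database link, depending on the action you settle on.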

Data dictionary report tool [closed]

I have been asked to extract the Oracle database dictionary with a tool. They used to do that with PowerDesigner 12.5: they generate a report and present it in HTML format. The report includes information about all tables and columns, and programmers can read it easily. The bad thing about it is that it takes about a week to produce such a report (reverse engineering, customizing...). They are trying to find a faster tool so they can generate a data dictionary report daily.
For now I have found Oracle Data Modeler, but I will download it to see if it is fast enough.
My question: do you know a fast tool to quickly generate a data dictionary report?
Oracle's SQL Developer tool will produce an HTML-formatted data dictionary very quickly and easily, as I recall. The data modeller functionality is probably more complex than you need.
The poor man's solution: generate an HTML report from SQL*Plus.
set markup html on spool on
break on owner on table_name skip 1
spool dailyDataDictionaryReport.html
select owner, table_name, column_name, data_type, data_length, data_precision
from all_tab_columns
where owner not in ('SYS','SYSOPER','XDB')
order by owner, table_name, column_id;
spool off
set markup html off
I think TOAD might have a good wizard for that.

Using materialised views to fix bugs and reduce code [closed]

The application I'm working on has a legacy problem where two tables, ADULT and CHILD, were created in an Oracle 11g DB.
This has led to a number of related tables that each have a field for both ADULT and CHILD, with no FK applied.
Bugs have arisen where poor development mapped relationships to the wrong field.
Our technical architect plans to merge the ADULT and CHILD tables into a new ADULT_CHILD table and create materialised views in place of the original tables. The plan is also to create a new id value and replace the id values in all associated tables, so that even if the PL/SQL/APEX code maps to the wrong field, the data mapping will still be correct.
The reasoning behind this solution is that it does not require changing any other code.
My opinion is that this is a fudge, but my background is more Java/.NET OO.
What arguments can I use to convince the architect that this is wrong and not a real solution? I'm concerned we are creating a more complex solution and that performance will be an issue.
Thanks for any pointers.
While it may be a needed solution, it might also create new issues. If you really do need an MV that is up to date at all times, you need on-commit refresh, and that in turn tends to make all updates sequential: every process writing to it waits in line for the one updating the table to commit. The contention is not on the table or on the row, but on the MV refresh itself.
So it is prudent to test the approach with realistic loads. Why does it have to become a single table? Could the tables not stay separate, with an FK added? If you need more control over the updates, rename the tables and put views with INSTEAD OF triggers in their place.
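A rough sketch of that last idea, with invented table and column names; existing code keeps selecting from and writing to ADULT, but the writes are intercepted:

-- Keep the real table under a new name and expose a view with the old name.
ALTER TABLE adult RENAME TO adult_tab;

CREATE OR REPLACE VIEW adult AS
SELECT id, first_name, last_name
  FROM adult_tab;

-- The INSTEAD OF trigger lets you control or validate every write to the view.
CREATE OR REPLACE TRIGGER adult_instead_of_ins
INSTEAD OF INSERT ON adult
FOR EACH ROW
BEGIN
    INSERT INTO adult_tab (id, first_name, last_name)
    VALUES (:NEW.id, :NEW.first_name, :NEW.last_name);
END;
/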

Where to get CSV sample data? [closed]

As part of my development I need to process some .csv files.
For what it matters, I am writing a super-fast CSV parser in Java.
I would like to ask if somebody can name some websites where I can find good CSV files so I can test my app.
Please don't flag this question as inappropriate; I think developers would benefit from a list of good sites where sample data can be found.
The baseball archive can be downloaded in CSV format. The batting statistics file contains a little over 90,000 rows of data which should be helpful in performance testing your app.
You can download the Sample CSV Data Files from this site.
Examples:
Sample Insurance Data
Real Estate Data
Sales Transactions Data
See also this question on sample data.
I've used http://www.fakenamegenerator.com for these purposes in the past.
Another good source is baseball reference. Pick whatever baseball player or manager you can think of.
http://www.baseball-reference.com/managers/coxbo01.shtml
This is a site, currently in beta, that can give you data in JSON, XML or CSV. All lists are customizable. This is a sample call to return data as CSV: http://mysafeinfo.com/api/data?list=dowjonescompanies&format=csv
Documentation on lists, formats and options: http://mysafeinfo.com/content/documentation
Over 80 data sets are available; see the full list under Datasets on the main menu.
If you're looking for some large CSV files with real-world data, try http://www.baseball-databank.org.
Several very nice testing CSV files: http://support.spatialkey.com/spatialkey-sample-csv-data/
Sample insurance portfolio,
Real estate transactions,
Sales transactions,
Company Funding Records,
Crime Records
Thank you for the question!
