I have a database containing a couple of tables: files and users. The relationship between them is many-to-many, so I also have a table called users_files_ref which holds foreign keys to both of the above tables.
Here's the schema of each table:
files -> file_id, file_name
users -> user_id, user_name
users_files_ref -> user_file_ref_id, user_id, file_id
I'm using Codeigniter to build a file host application, and I'm right in the middle of adding the functionality that enables users to upload files. This is where I'm running into my problem.
Once I add a file to the files table, I will need that new file's id to update the users_files_ref table. Right now I'm adding the record to the files table, and then I imagined I'd run a query to grab the last file added, so that I can get the ID, and then use that ID to insert the new users_files_ref record.
I know this will work on a small scale, but I imagine there is a better way of managing these records, especially in a heavy-traffic scenario.
I am new to relational database stuff but have been around PHP for a while, so please bear with me here :-)
I have primary and foreign keys set up correctly for the files, users, and users_files_ref tables; I'm just wondering how to manage adding file records in this scenario.
Thanks for any help provided, it's much appreciated.
-Wes
Use $this->db->insert_id() to get the id number of the row you just inserted. Further documentation here: http://codeigniter.com/user_guide/database/helpers.html
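For illustration, a minimal sketch of how the two inserts could be chained in a CodeIgniter model (the method name is made up; the table and column names follow the schema above):

// Hypothetical model method: add the file, then link it to the user.
function add_file_for_user($user_id, $file_name)
{
    // Insert the new file record.
    $this->db->insert('files', array('file_name' => $file_name));

    // Get the auto-increment ID generated by that insert.
    // It is tied to this database connection, so concurrent uploads
    // by other users cannot interfere with it.
    $file_id = $this->db->insert_id();

    // Create the join-table row linking the user and the file.
    $this->db->insert('users_files_ref', array(
        'user_id' => $user_id,
        'file_id' => $file_id,
    ));

    return $file_id;
}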
You're basically describing how it's normally done, with one important adjustment: how you retrieve the file_id of the new file so you can add it to users_files_ref.
Normally in a database environment you have many clients connecting at the same time, doing updates simultaneously. In such an environment you can't just get the file_id of the last file added - it might be someone else's file added in between your DB calls. You have to use the database's functionality to get the ID it generated (e.g. SELECT @@IDENTITY on MSSQL) or generate the IDs in the application code somehow.
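If the backend is MySQL (a common pairing with CodeIgniter, and what insert_id() relies on there), the connection-scoped equivalent is LAST_INSERT_ID(). A sketch, with a made-up user_id:

INSERT INTO files (file_name) VALUES ('photo.jpg');

-- LAST_INSERT_ID() is per-connection, so it still refers to the row above
-- even if other users insert files at the same time.
INSERT INTO users_files_ref (user_id, file_id)
VALUES (42, LAST_INSERT_ID());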
I think what you need is just this:
users_files_ref -> user_id, file_id   (with the pair as a composite primary key)
How you get the file_id depends on the code you're implementing. Your reasoning is correct: you already have the user_id and just need the file_id. With these two values you can add a new row to users_files_ref.
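As a sketch (MySQL-style DDL, assuming integer keys), the junction table could then be declared like this:

CREATE TABLE users_files_ref (
  user_id INT NOT NULL,
  file_id INT NOT NULL,
  PRIMARY KEY (user_id, file_id),
  FOREIGN KEY (user_id) REFERENCES users (user_id),
  FOREIGN KEY (file_id) REFERENCES files (file_id)
);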
When I need to do this I usually have a stored procedure that, with the help of a sequence, inserts the file and returns the sequence NEXTVAL as the output parameter. This might be a way of implementing such a scenario.
This is the code for an Oracle based stored procedure:
CREATE OR REPLACE PROCEDURE SP_IMPORT_FILE(P_FILE    IN  FILE.FILE%TYPE,
                                           P_FILE_ID OUT NUMBER)
IS
BEGIN
  SELECT SEQ_FILE.NEXTVAL INTO P_FILE_ID FROM DUAL;
  INSERT INTO FILE (FILE_ID, FILE) VALUES (P_FILE_ID, P_FILE);
END SP_IMPORT_FILE;
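Calling it from an anonymous block might look like this (a sketch; the literal values and the users_files_ref column list are assumptions):

DECLARE
  V_FILE_ID NUMBER;
BEGIN
  SP_IMPORT_FILE(P_FILE => 'photo.jpg', P_FILE_ID => V_FILE_ID);
  -- V_FILE_ID now holds the generated key and can be used for the join table.
  INSERT INTO users_files_ref (user_id, file_id) VALUES (42, V_FILE_ID);
  COMMIT;
END;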
I'm currently trying to create txt files from all tables in the dbo schema.
I have around 200-300 tables there, so it would take too much time to create them manually.
I was thinking of creating a loop.
So, as an example (using AdventureWorks2019):
select t.name as table_name
from sys.tables t
where schema_name(t.schema_id) = 'Person'
order by table_name;
This would get all the table names within the Person schema.
So I would loop:
Table input : select * from ${table_name}
But then I realized that for txt files I need to declare all the fields and their data types in Pentaho, so it becomes a problem.
Any ideas on how to do this "backup" to txt files?
Use Metadata Injection plus more queries against the schema catalog tables in SQL Server. You not only need to retrieve the table names; afterwards you also need to retrieve the columns in each table and their data types, and inject that information (metadata) into the Text file output step.
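For example, a catalog query along these lines (a sketch; you may want more columns, such as precision and scale) returns the per-column metadata to inject:

-- Retrieve every column and its data type for all tables in the dbo schema,
-- to be injected into the Text file output step.
SELECT t.name  AS table_name,
       c.name  AS column_name,
       ty.name AS data_type,
       c.max_length,
       c.column_id
FROM sys.tables t
JOIN sys.columns c ON c.object_id = t.object_id
JOIN sys.types ty ON ty.user_type_id = c.user_type_id
WHERE schema_name(t.schema_id) = 'dbo'
ORDER BY t.name, c.column_id;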
The samples directory of your Spoon installation contains an example of how to use Metadata Injection; use it, along with the documentation, to build a simple example (the option to generate a transformation with the injected metadata is very useful for debugging).
I have something similar to copy data from one database to another, both in Oracle, but SQL Server has similar catalog tables for retrieving the information you need. I created a simple, almost empty transformation that reads from one table and writes to another. This transformation holds almost no information, only the source database in the Table input step and the target database in the Table output step.
Then I have a second transformation where I fill in all the information (metadata) to inject: the query to run in the Table input step, and everything the Table output step needs: the target table, whether to truncate before inserting, and the column mappings from stream field to table field.
How can I compare table data structures? Specifically:
1. Any table added or deleted.
2. Any column in the tables added or deleted.
So my job is to verify, on the 1st of every month, whether any tables or columns have been added or deleted.
My plan is to run a SQL query that takes a copy of the entire list of tables and their column data types only (NO DATA), save it in a txt file or something and use it as a baseline, then next month run the same SQL query and compare the results against the file. Is that possible? Please help with a SQL query that can do this job.
This query will give you a list of all tables and their columns for a given user (just replace ABCD in this query with the user you have to audit; provided you have access to all of that user's tables, this will work).
SELECT table_name,
       column_name
FROM   all_tab_columns
WHERE  owner = 'ABCD'
ORDER  BY table_name, column_id;
This answers your question, but I have to agree with a_horse_with_no_name that this is not a good way to implement change control, most notably because the changes have already happened.
This query is very basic and doesn't give you all the information you'd need to see whether a column has changed (or any information about other object types etc.), but then you only asked about additions and deletions of tables and columns, and you can compare the output of this script to previous outputs to find the answer to your allotted task.
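If you keep each month's output in a snapshot table rather than a flat file, the comparison itself can also be done in SQL. A minimal sketch, assuming a hypothetical snapshot table called baseline_tab_columns with the same columns:

-- Tables/columns that existed in the baseline but are gone now (deleted)
SELECT table_name, column_name FROM baseline_tab_columns
MINUS
SELECT table_name, column_name FROM all_tab_columns WHERE owner = 'ABCD';

-- Tables/columns present now that were not in the baseline (added)
SELECT table_name, column_name FROM all_tab_columns WHERE owner = 'ABCD'
MINUS
SELECT table_name, column_name FROM baseline_tab_columns;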
For a project, I want to have a "History" table for my records. I have two tables for this (example) system:
RECORDS
ID
NAME
CREATE_DATE
RECORDS_HISTORY
ID
RECORDS_ID
LOG_DATE
LOG_TYPE
MESSAGE
When I insert a record into RECORDS, how can I automatically create an associated entry in RECORDS_HISTORY where RECORDS_ID is equal to the newly inserted ID in RECORDS?
I currently have a sequence on the ID in RECORDS to automatically increment when a new row is inserted, but I am unsure how to prepopulate a record in RECORDS_HISTORY that will look like this for each newly created (not updated) record.
INSERT INTO RECORDS_HISTORY (RECORDS_ID, LOG_DATE, LOG_TYPE, MESSAGE) VALUES (<records.id>, sysdate(), 'CREATED', 'Record created')
How can I create this associated _HISTORY record on creation?
You didn't mention which DB you are working with; I assume it's Oracle. The most obvious answer is: use an "on insert" trigger. You can even get back the ID (sequence value) from the insert statement into the RECORDS table. Disadvantages of this solution: triggers are somewhat "hidden" code, they can slow down massive inserts, and you consume roughly double the disk space by storing partially redundant data. What if RECORDS gets updated or deleted? Can that happen, and do you have to take care of that as well? The big question is: what is your goal?
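If a trigger is what you want, here is a minimal sketch (Oracle syntax; it assumes RECORDS.ID is already populated, e.g. by your existing sequence, and that RECORDS_HISTORY.ID gets its own value from a sequence or identity column):

CREATE OR REPLACE TRIGGER trg_records_history
AFTER INSERT ON records
FOR EACH ROW
BEGIN
  -- :new.id is the freshly generated RECORDS.ID for this row
  INSERT INTO records_history (records_id, log_date, log_type, message)
  VALUES (:new.id, SYSDATE, 'CREATED', 'Record created');
END;
/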
There are proven historization concepts around. Have a look at this: https://en.wikipedia.org/wiki/Slowly_changing_dimension
I need some help with PL/SQL.
So my problem is the following:
There is a table called temp_table, and I need to recreate it without using drop/truncate. This is needed because the table's data changes all the time.
I know it's weird, but this is necessary for my daily job.
The script works like this:
The script imports a text file into the table, and the table is given. It uses a dblink to connect to the database. It works, but every time I have to use DROP. What I need (if it's possible) is to reload an existing table without drop/truncate.
Can someone help me?
Thanks a lot.
Sorry for not including SQL code, but I don't think it's necessary.
I think the concept you want is the external table. With external tables the data resides in OS files, such as CSVs. This allows us to swap data sets without dropping the table.
Find out more.
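For illustration, a minimal external-table sketch (the column list, the DATA_DIR directory object and the file name are made-up assumptions):

CREATE TABLE temp_table_ext (
  col1 VARCHAR2(100),
  col2 NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('temp_data.txt')
);

-- Refreshing the data is then just a matter of replacing temp_data.txt on disk;
-- no DROP or TRUNCATE is needed and the table definition never changes.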
I take it you want to drop the table because you want to reload it, but you also want there to be as close to constant up-time as possible?
I would create two temp tables. You already have one called:
temp_table
Create another called:
temp_table_new
Load your new data into temp_table_new, then run a rename on it like so:
RENAME temp_table TO temp_table_old;
RENAME temp_table_new TO temp_table;
Then:
DROP TABLE temp_table_old;
This will be super fast, have very little downtime, and allow you to have the functionality you've described.
I have hit a bump at my current company: they have accounts and members, and for some reason or another both are stored in separate tables.
Right now both a member and an account can be registered. That's fine, except that a member and an account can have the same username. This is, of course, just wrong, especially since they use the username to log in to the same system, just with different functionality levels.
Right now we are doing a check at the application level, and we're just wondering if it's possible to get the database to enforce uniqueness across the two columns, something like a union of the two tables.
We can't set them up as primary or foreign keys at the moment, but that's for the future anyway. Right now I'm looking for a quick fix. In the future I will probably merge the databases and get all members added as new rows in the account table, with a boolean IsMember column.
In general, I agree with the consensus opinion that it's better to fix the design than to kluge a fix using triggers. However, a properly implemented trigger-based solution is still probably better than your current situation.
If you're going to use triggers, the right way to do it is to:
Create a new table that will contain nothing but usernames, with a primary key enforcing uniqueness (this may, in fact, be a good candidate for an index-organized table).
Create before-insert triggers on both existing tables that add the new username to the new table. If the new username already exists, an error will be thrown, preventing the insert of both rows. Of course, the application will need to be able to handle this error gracefully (presumably it already can, for scenarios in which the new username already exists in the table it's being added to).
The wrong way to do this would be to make the trigger select from the other table, in order to verify uniqueness.
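A sketch of that approach (Oracle syntax assumed; the table and trigger names, and the username column name, are made up):

-- One row per username, regardless of which table it came from.
CREATE TABLE all_usernames (
  username VARCHAR2(100) CONSTRAINT pk_all_usernames PRIMARY KEY
) ORGANIZATION INDEX;

CREATE OR REPLACE TRIGGER tr_member_username
BEFORE INSERT ON members
FOR EACH ROW
BEGIN
  -- Fails with ORA-00001 if the username already exists in either table.
  INSERT INTO all_usernames (username) VALUES (:new.username);
END;
/

CREATE OR REPLACE TRIGGER tr_account_username
BEFORE INSERT ON accounts
FOR EACH ROW
BEGIN
  INSERT INTO all_usernames (username) VALUES (:new.username);
END;
/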
You can add a trigger that enforces your requirement.
The recommended triggers tend to be really brittle with concurrent transactions.
What you can do (AFAIK) is create a materialized view containing the union of the column in question from both tables and put a unique constraint on that column.
Make sure you do some performance tests though.
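A sketch of that idea in Oracle (names are made up; the exact fast-refresh requirements, such as the marker column and the materialized view logs, can vary by version, so treat this as a starting point):

CREATE MATERIALIZED VIEW LOG ON members  WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON accounts WITH ROWID;

-- Refreshed on commit, so the unique constraint is checked whenever either table changes.
CREATE MATERIALIZED VIEW mv_all_usernames
REFRESH FAST ON COMMIT
AS
SELECT username, 'M' AS src, ROWID AS rid FROM members
UNION ALL
SELECT username, 'A' AS src, ROWID AS rid FROM accounts;

-- The commit fails if the same username appears in both tables (or twice overall).
ALTER TABLE mv_all_usernames
  ADD CONSTRAINT uq_mv_all_usernames UNIQUE (username);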
Since you use a soft-delete pattern, a trigger could be used (on each table) as a temporary measure.
By inserting a disabled record into the other table, you will get a failure if that record already exists.
Remember this will not enforce the rule on existing data; only newly inserted records will be checked.
Something like this:
-- Insert a disabled placeholder into the accounts table too
CREATE OR REPLACE TRIGGER tr_member_chk
BEFORE INSERT ON members
FOR EACH ROW
-- Only fire for enabled rows, so the two triggers do not keep firing each other
WHEN (NVL(new.isenabled, 1) <> 0)
BEGIN
  INSERT INTO accounts (name, id, etc, isenabled)
  VALUES (:new.name, :new.id, :new.etc, 0);
END;
/

-- Insert a disabled placeholder into the members table too
CREATE OR REPLACE TRIGGER tr_account_chk
BEFORE INSERT ON accounts
FOR EACH ROW
WHEN (NVL(new.isenabled, 1) <> 0)
BEGIN
  INSERT INTO members (name, id, etc, isenabled)
  VALUES (:new.name, :new.id, :new.etc, 0);
END;
/