CDX file not updated after adding a new record to a DBF file - visual-foxpro

I would like to add a new record to a DBF file (the file name is STCRD.dbf). I added the new record by using the Visual Studio 2017 > Data Connections window.
After the new record is added, the CDX file, which is named STCRD.cdx, is not updated. I'm a newbie in Visual FoxPro, and I still don't know whether the DBF file should change as well after the CDX file is changed, or not. Can anyone point me to the correct way to add a new record to this kind of database?
I found a link that contains some information, but I cannot understand the answer about issuing the PACK command.

You may want to check other posts about working with VFP and OleDb connections. The connection should point to the PATH where the files are located, not to the individual file names (dbf / cdx / fpt).
So your connection string should point to just the folder:
D:\Work\Sirichai\TNPSE
Then your select, insert, update, and delete statements can just refer to the table and SHOULD work without issue:
select * from STCRD where ....
insert into STCRD ( fld1, fld2, fld3 ) values ( ?, ?, ? )
and parameterize the queries
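For reference, a typical connection string for the Microsoft OLE DB Provider for Visual FoxPro looks something like this (a minimal sketch, assuming the VFPOLEDB provider is installed; note that Data Source is the folder, not a file):
Provider=VFPOLEDB.1;Data Source=D:\Work\Sirichai\TNPSE;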

I'm a newbie in Visual FoxPro. I still don't know whether the DBF file should change as well after the CDX file is changed, or not. Can anyone point me to the correct way to add a new record to this kind of database?
In VFP you create a Data Table (a DBF file) on which you can create an associated Index file (a CDX file).
Something like:
CREATE TABLE MyTable (field1 C(10), field2 M, Field3 D)
SELECT MyTable
INDEX ON field1 TAG firstfld
Perhaps spending some time at the free, on-line VFP tutorials will help.
Free On-line VFP tutorials
Modifications to the Data Table (the DBF file - STCRD.dbf) will automatically 'trigger' the associated Index file (the CDX file) to update.
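For example, a minimal sketch using the MyTable created above; the index tag maintenance needs no extra step:
INSERT INTO MyTable (field1, field3) VALUES ("ABC", DATE())  && the firstfld tag in the CDX is updated automatically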
When you create an ODBC Connection you do so to the DBF file (STCRD.dbf), NOT the CDX file. Once your Data Table 'knows' about its associated Index file, CDX file behavior is 'automatic'. You don't work directly with the CDX file.
I guess that, for me, it is unclear what you are trying to accomplish. Get the ODBC Connection set up correctly and then explain clearly what it is that you are having trouble with.
Side Note: A DATABASE is a container (some are 'intelligent', like MS SQL Server; others are not, like a VFP Database), but a database itself is not a Data Table. The Data Table may or may not reside within a DATABASE, but it is the Data Table (not the DATABASE) in which data records 'live'.
Good Luck

Related

Creating txt file using Pentaho

I'm currently trying to create txt files from all the tables in the dbo schema.
I have around 200-300 tables there, so it would take too much time to create them manually.
I was thinking of creating a loop.
For example (using AdventureWorks2019):
select t.name as table_name
from sys.tables t
where schema_name(t.schema_id) = 'Person'
order by table_name;
This would get all the table names within the Person schema.
So I would loop:
Table input: select * from ${table_name}
But then I realized that for txt files I need to declare all the fields and their data types in Pentaho, so that becomes a problem.
Any ideas how to create these "backup" txt files?
Use Metadata Injection plus more queries against the schema catalog tables in SQL Server. You not only need to retrieve the table name; you also need to retrieve the columns in that table and their data types, and inject that information (metadata) into the Text file output step.
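For instance, a catalog query along these lines (a sketch against SQL Server's INFORMATION_SCHEMA; 'Person' and 'Address' are example values) returns the column metadata you would inject:
select c.COLUMN_NAME,
       c.DATA_TYPE,
       c.CHARACTER_MAXIMUM_LENGTH
from INFORMATION_SCHEMA.COLUMNS c
where c.TABLE_SCHEMA = 'Person'
  and c.TABLE_NAME = 'Address'
order by c.ORDINAL_POSITION;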
In the samples directory of your Spoon installation you have an example of how to use Metadata Injection. Use it, along with the documentation, to build a simple example (the option to generate a transformation with the metadata you have injected is of great use for debugging).
I have something similar to copy data from one database to another, both in Oracle, but SQL Server has similar catalog tables to retrieve the information you need. I created a simple, almost empty transformation that reads one table and writes to another. This transformation has almost no information, only the origin database in the Table Input step and the target database in the Table Output step.
Then I have a second transformation where I fill in all the information (metadata) to inject: the query to perform in the Table Input step, and everything I need in the Table Output step: the target table, whether to truncate before inserting, and the column mappings from (stream field) to (table field).

updating data in external table

Let's assume the following scenario:
I have several users who will prepare .csv files (not being aware of each other, so concurrency is possible).
The .csv files will always be in the same format.
The data in the .csv files will contain a list of ids together with some other columns, like update_date.
Based on that data, I will create a procedure that updates the data in a real DB table.
The idea is to use external tables to simplify things as much as possible for the .csv creators: they will put files in a folder and the rest is my job.
The questions are:
Can I have several files as the source for one external table, or do I need one external table per file? (What I mean here is: whenever there is a new call to load data from a csv, it should be added to the existing external table, so not all files are loaded at once.)
Can I update records/fields in an external table?
An external table basically allows you to query the data stored in the external file(s). So from this point of view, you can't issue an UPDATE on it.
You can
1) add new files in the directory and ALTER the table
ALTER TABLE my_ex LOCATION ('file1.csv','file2.csv');
2) you can of course modify the existing files as well. The external table has no state stored in the database; each SELECT reads the data from the files, so you will always see the "updated" content.
** UPDATE **
An attempt to modify it (e.g. with an UPDATE) leads to ORA-30657 "operation not supported on external organized table".
To be able to maintain state in the database, the data must first be copied into a regular table (CTAS: create table as select from the external table).
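For example (a minimal sketch; my_ex is the external table from above, my_data is a hypothetical target name, and id/update_date are the columns described in the question):
CREATE TABLE my_data AS SELECT * FROM my_ex;
-- from here on, normal DML works
UPDATE my_data SET update_date = SYSDATE WHERE id = 1;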

Where can I find the source of this temp table in VB 6.0

I want to know the source of this table and how it is calculated from the other tables. I am using SQL Server 2008 R2 and I searched for the table, but it is not there. It is formed by manipulating some rows of different tables. I also searched for the corresponding table in the VB 6 code, but it is not there. Is there any way to find the source table?
The Source in the local variables window is:
"Select * From #70554TempShiz52"
Tables with names starting with # or ## are temporary tables (Quick Overview: Temporary Tables in SQL Server 2005).
The table exists only as long as the connection in which it was created exists, and it is accessible only from that connection.
To find the table, you should look for the corresponding CREATE TABLE #70554TempShiz52 statement in the code.
The table exists in the tempdb database. An admin can see it there using SSMS (only while the connection is still open and the table has not been dropped). I usually put a breakpoint in the code to reach the desired state. In tempdb the name of the table looks like #70554TempShiz52__________...some number (a suffix that distinguishes tables created by different connections).
It can be useful to use a name starting with ## for debugging, because such a global temporary table is visible from other connections.
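A quick sketch of the difference in scope (the column list is hypothetical):
-- local temp table: visible only to the connection that created it
CREATE TABLE #70554TempShiz52 (id INT, val VARCHAR(50));
-- global temp table: visible from other connections too, handy for debugging
CREATE TABLE ##70554TempShiz52 (id INT, val VARCHAR(50));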

Attempting to use SQL-Developer to analyze a system table dump created with 'exp'

I'm attempting to recover the data for a specific table that exists in a system table dump I performed earlier. I would like to append the rows existing in the dump to any rows that may exist in the active table. The problem is, it's likely that the name of the table in the dump is not the same as what exists in the database currently (they're dynamically created with a prefix of ARC_TREND_). In addition, I don't know the name of the table as it exists in the dump. I was hoping to use SQL Developer to analyze the dump file, as I can recognize the correct table by its columns and its existing rows.
While I'm going on blind faith that SQL Developer can work with my dump file, when attempting to open it I'm getting a Java heap OutOfMemory exception. I've adjusted the maximum heap size from 640m to 1024m in both sqldeveloper.bat and sqldeveloper.conf, but to no avail.
Can someone recommend a course of action to recover the data from a table which exists in an exp-created dump file? A graphical tool would be nice, but I'm no stranger to the command line. I need to analyze the tables that exist in the dump in order to pick the correct one out. Then I assume I can use imp TABLES= to bring it back into the active instance. It likely won't match the existing table name, so I will use SQL Developer to copy the rows from the imported table to the table where I need them to be.
The dump was taken from a Linux server running 10g, and will be imported to (the same server & database instance, upgraded) an 11g instance of the same database.
Thanks
Since you're referring to imp rather than impdp, I assume this wasn't exported with data pump. Either way, I doubt you'll get anything useful through SQL Developer.
Fortunately most of what you're trying to do is quite easy from the command line; just run imp with the INDEXFILE parameter, which will give you a text file containing all the table (commented out with REM) and index creation commands. From that you should be able to spot the table from its column names.
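Something like this (a sketch; substitute your own credentials and file names):
imp scott/tiger FILE=expdat.dmp FULL=Y INDEXFILE=ddl_listing.sql
The generated ddl_listing.sql will contain the REM-commented CREATE TABLE statements, which you can scan for the ARC_TREND_ table whose columns you recognize.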
You can't really see any row data though, so if there's more than one possible match you might need to import several tables and inspect the data in them in the database to see which one you really want.

How should I manage my many-to-many relationships?

I have a database containing a couple tables: files and users. This relationship is many-to-many, so I also have a table called users_files_ref which holds foreign keys to both of the above tables.
Here's the schema of each table:
files -> file_id, file_name
users -> user_id, user_name
users_files_ref -> user_file_ref_id, user_id, file_id
I'm using Codeigniter to build a file host application, and I'm right in the middle of adding the functionality that enables users to upload files. This is where I'm running into my problem.
Once I add a file to the files table, I will need that new file's id to update the users_files_ref table. Right now I'm adding the record to the files table, and then I imagined I'd run a query to grab the last file added, so that I can get the ID, and then use that ID to insert the new users_files_ref record.
I know this will work on a small scale, but I imagine there is a better way of managing these records, especially in a heavy-traffic scenario.
I am new to relational database stuff but have been around PHP for a while, so please bear with me here :-)
I have primary and foreign keys set up correctly for the files, users, and users_files_ref tables, I'm just wondering how to manage the adding of file records for this scenario?
Thanks for any help provided, it's much appreciated.
-Wes
Use $this->db->insert_id() to get the id number of the row you just inserted. Further documentation here: http://codeigniter.com/user_guide/database/helpers.html
You're basically describing how it normally is done, with one important adjustment: how you retrieve the file_id of the file to be able to add it to users_files_ref.
Normally in a database environment you have many clients connecting at the same time, doing updates simultaneously. In such an environment you can't just get the file_id of the last file added - it might be someone else's file, added in between your DB calls. You have to use the database's functionality to get the generated ID (e.g. SELECT @@IDENTITY on MSSQL) or generate the IDs in the application code somehow.
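As an illustration (a sketch; the exact statement depends on your database, and files/file_name are this question's names):
INSERT INTO files (file_name) VALUES ('report.pdf');
-- MySQL:      SELECT LAST_INSERT_ID();
-- SQL Server: SELECT SCOPE_IDENTITY();
Both return the id generated by your own session, so a concurrent insert from another client can't be picked up by mistake; CodeIgniter's $this->db->insert_id() wraps exactly this mechanism.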
I think what you need is just this:
                   ---- primary key ----
users_files_ref -> |  user_id, file_id  |
How you get the file_id depends on the code you're implementing. Your reasoning is correct: you already have the user_id and just need to get the file_id. With these values you can add a new row to users_files_ref.
When I need to do this I usually have a stored procedure that, with the help of a sequence, inserts the file and returns the sequence NEXTVAL as an output parameter. This might be one way of implementing such a scenario.
This is the code for an Oracle-based stored procedure (using this question's files table, since FILE is a reserved word in Oracle and can't be used as an identifier):
CREATE OR REPLACE PROCEDURE SP_IMPORT_FILE(p_file_name IN  files.file_name%TYPE,
                                           p_file_id   OUT NUMBER)
IS
BEGIN
   -- fetch the next id from the sequence, then insert the new row with it
   SELECT SEQ_FILE.NEXTVAL INTO p_file_id FROM DUAL;
   INSERT INTO files (file_id, file_name) VALUES (p_file_id, p_file_name);
END SP_IMPORT_FILE;
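A sketch of how you might call it afterwards (42 is a placeholder user_id, and it assumes user_file_ref_id is filled by a trigger or default):
DECLARE
   v_file_id NUMBER;
BEGIN
   SP_IMPORT_FILE('myfile.txt', v_file_id);
   -- link the new file to the user
   INSERT INTO users_files_ref (user_id, file_id) VALUES (42, v_file_id);
END;
/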
