FoxPro: moving fields using a .prg (code) - visual-foxpro

I would like to alter the table to move fields from one place to another:
ABS1
ABS2
ABS4
ABS8
ABS3
So I would like to move ABS3 so it sits after ABS2, but not by moving it manually.
I would like code to do it for me.

Assuming that table is named "mytable.dbf" and you have exclusive access:
* copy the data out to a temporary table
select * from myTable into table tmp
* close and erase the original table files
use in ('myTable')
erase ('myTable.dbf')
* erase ('myTable.fpt')  && only if the table has a memo file
* erase ('myTable.cdx')  && only if the table has a structural index
* recreate the table with the fields in the desired order
select ABS1, ABS2, ABS3, ABS4, ABS8 from tmp into table myTable
and then recreate the indexes as well.
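For example, a minimal sketch of recreating one index tag afterwards (the tag name and key are placeholders, not taken from the question):
use myTable exclusive
index on ABS1 tag abs1   && repeat for each tag the table had before
use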


can't display table in multiple textboxes

I must go through the records of a table and display them in multiple textboxes.
I am using the table with four different aliases to have four work areas on the same table, and therefore four record pointers.
USE Customers ALIAS customers1
USE customers AGAIN ALIAS customers2
USE customers AGAIN ALIAS customers3
USE customers AGAIN ALIAS customers4
Thisform.TxtNomCli.ControlSource = "customers.name"
Thisform.TxtIdent.ControlSource = "customers.identify"
Thisform.TxtAddress.ControlSource = "customers.address"
Thisform.TxtTele.ControlSource = "customers.phone"
Thisform.TxtNomCli2.ControlSource = "customers2.name"
Thisform.TxtIdent2.ControlSource = "customers2.identify"
Thisform.TxtDirec2.ControlSource = "customers2.address"
Thisform.TxtTele2.ControlSource = "customers2.phone"
Thisform.TxtNomCli3.ControlSource = "customers3.name"
Thisform.TxtIdent3.ControlSource = "customers3.identify"
Thisform.TxtDirec3.ControlSource = "customers3.address"
Thisform.TxtTele3.ControlSource = "customers3.phone"
Thisform.TxtNomCli4.ControlSource = "customers4.name"
Thisform.TxtIdent4.ControlSource = "customers4.identify"
Thisform.TxtDirec4.ControlSource = "customers4.address"
Thisform.TxtTele4.ControlSource = "customers4.phone"
How do I go through the records of the table so that customers is on the first record, customers2 on the second, customers3 on the third, and customers4 on the fourth record of the table?
How do I make each row of textboxes show the corresponding row of the table?
I would SQL SELECT the id plus whatever other fields you need into four cursors:
select id, identifica, nombre, direccion, telefono from customers ;
into cursor customers1 nofilter readwrite
select id, identifica, nombre, direccion, telefono from customers;
into cursor customers2 nofilter readwrite
* repeat for 3 and 4
Then set your ControlSources to the cursors, not the base table. If you need to update records, you can use the id of the modified record in the cursor to update the correct record in the base table.
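For instance, a minimal write-back sketch (field names taken from the cursor query above; id is assumed to be the key; untested):
update customers ;
   set nombre = customers1.nombre, telefono = customers1.telefono ;
   where id = customers1.id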
You could simply use SET RELATION to achieve what you want. However, in your current code you are not really using 4 aliases: you are reopening the same table under a different alias in the same work area, so you would end up with a single table open under the alias Customers4. To do it correctly, you need to add the "IN 0" clause to your USE commands, i.e.:
USE customers ALIAS customers1
USE customers IN 0 AGAIN ALIAS customers2
USE customers IN 0 AGAIN ALIAS customers3
USE customers IN 0 AGAIN ALIAS customers4
SELECT customers1
SET RELATION TO ;
   RECNO()+1 INTO Customers2, ;
   RECNO()+2 INTO Customers3, ;
   RECNO()+3 INTO Customers4 IN Customers1
With this setup, as you move the record pointer in Customers1, it moves in the other 3 aliases accordingly (note that no order is set).
Having said this, you should now think about why you need to do this. Maybe having another control, like a grid, is the way to go? Or using an array might be a better way to handle it? i.e., with an array:
USE (_samples+'data\customer') ALIAS customers
LOCAL ARRAY laCustomers[4]
LOCAL ix
FOR ix=1 TO 4
   GO m.ix
   SCATTER NAME laCustomers[m.ix]
ENDFOR
? laCustomers[1].Cust_id, laCustomers[2].Cust_id, laCustomers[3].Cust_id, laCustomers[4].Cust_id
With this approach, you could set your ControlSources to laCustomers[1].Identify, laCustomers[1].name, and so on. When saving back to the data, you would go to the related record and do a GATHER. That would be all.
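For example, a minimal save-back sketch for the first row (assuming cust_id identifies the record; untested):
SELECT customers
LOCATE FOR cust_id = laCustomers[1].cust_id
IF FOUND()
   GATHER NAME laCustomers[1]
ENDIF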
First you need to think about what you really want to do.

SQLAlchemy: substitute a view for a table when selecting

I've a normal select:
e = session.query(MyTable).filter(MyTable.pk=='abc').all()
where MyTable is mapped to my_table in the db.
I've also created a derived view my_view in the database that has exactly the same named columns as my_table.
Is there a way to substitute my_view for my_table at query time so that I get back rows from the view? Obviously the resulting objects would need to be read-only; I am not intending to alter them.
So basically I'd want the SQL to be
FROM my_view AS my_table
instead of
FROM my_table
With everything else the same in the query.
I'd prefer not to create another mapper unless it can be done automatically somehow, as MyView has over 60 columns, the same as MyTable.
Update: select_entity_from seems to be what I need, but in this case it just adds to the FROM tables instead of replacing:
v = Table('my_view', metadata, autoload=True)
print session.query(MyTable).select_entity_from(v).filter(MyTable.pk=='abc')
"SELECT ... FROM my_table, my_view WHERE my_table.pk = 'abc';"
But the following only has one FROM entity:
print session.query(MyTable).\
    select_entity_from(select([MyTable])).\
    filter(MyTable.pk=='abc')
"SELECT ... FROM (SELECT ... FROM my_table) AS anon_1 WHERE anon_1.pk = 'abc';"
Further digging has got me closer to the answer, but no joy. Posting progress here in case it solicits further thoughts:
from sqlalchemy.orm.util import aliased

session.query(MyTable).select_entity_from(
    select([aliased(MyTable, alias=v, adapt_on_names=True)])).\
    filter(MyTable.pk=='abc')
adapt_on_names seems to be what I need, and I've double-checked that the view has exactly the same column names as the table, but the above still produces two FROMs.
Filed what I thought was a bug: https://bitbucket.org/zzzeek/sqlalchemy/issues/3933/allow-passing-aliased-to
Final answer was:
from sqlalchemy import text

session.query(MyTable).select_entity_from(
    text('SELECT * FROM my_view').columns(*MyTable.__table__.columns)).\
    filter(MyTable.pk=='abc')
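Put end to end, a minimal self-contained sketch of that final approach (SQLAlchemy 1.x style to match the select([...]) calls above; the two-column model is a stand-in, not the original 60-column one):
from sqlalchemy import Column, String, create_engine, text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class MyTable(Base):
    __tablename__ = 'my_table'
    pk = Column(String, primary_key=True)

session = sessionmaker(bind=create_engine('sqlite://'))()

# the query is mapped to MyTable but selects from my_view
q = session.query(MyTable).select_entity_from(
    text('SELECT * FROM my_view').columns(*MyTable.__table__.columns)
).filter(MyTable.pk == 'abc')
print(q)  # SELECT ... FROM (SELECT * FROM my_view) WHERE pk = :pk_1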

Delete duplicate rows from a BigQuery table

I have a table with >1M rows of data and 20+ columns.
Within my table (tableX) I have identified duplicate records (~80k) in one particular column (troubleColumn).
If possible I would like to retain the original table name and remove the duplicate records from my problematic column; otherwise I could create a new table (tableXfinal) with the same schema but without the duplicates.
I am not proficient in SQL or any other programming language so please excuse my ignorance.
-- note: this deletes every row whose Fixed_Accident_Index occurs more than
-- once, including the original copy; it does not keep one row per key
delete from Accidents.CleanedFilledCombined
where Fixed_Accident_Index in (
  select Fixed_Accident_Index
  from Accidents.CleanedFilledCombined
  group by Fixed_Accident_Index
  having count(Fixed_Accident_Index) > 1
);
You can remove duplicates by running a query that rewrites your table (you can use the same table as the destination, or you can create a new table, verify that it has what you want, and then copy it over the old table).
A query that should work is here:
SELECT *
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (PARTITION BY Fixed_Accident_Index) row_number
  FROM Accidents.CleanedFilledCombined
)
WHERE row_number = 1
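If you want the result written straight back over the table, a sketch that folds the query above into a DDL statement and drops the helper column with EXCEPT (the CREATE OR REPLACE pattern also appears in answers further down):
CREATE OR REPLACE TABLE Accidents.CleanedFilledCombined AS
SELECT * EXCEPT(row_number)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (PARTITION BY Fixed_Accident_Index) row_number
  FROM Accidents.CleanedFilledCombined
)
WHERE row_number = 1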
UPDATE 2019: To de-duplicate rows on a single partition with a MERGE, see:
https://stackoverflow.com/a/57900778/132438
An alternative to Jordan's answer - this one scales better when there are too many duplicates:
#standardSQL
SELECT event.* FROM (
  SELECT ARRAY_AGG(
    t ORDER BY t.created_at DESC LIMIT 1
  )[OFFSET(0)] event
  FROM `githubarchive.month.201706` t
  # GROUP BY the id you are de-duplicating by
  GROUP BY actor.id
)
Or a shorter version (takes any row, instead of the newest one):
SELECT k.*
FROM (
  SELECT ARRAY_AGG(x LIMIT 1)[OFFSET(0)] k
  FROM `fh-bigquery.reddit_comments.2017_01` x
  GROUP BY id
)
To de-duplicate rows on an existing table:
CREATE OR REPLACE TABLE `deleting.deduplicating_table`
AS
# SELECT id FROM UNNEST([1,1,1,2,2]) id
SELECT k.*
FROM (
  SELECT ARRAY_AGG(row LIMIT 1)[OFFSET(0)] k
  FROM `deleting.deduplicating_table` row
  GROUP BY id
)
Not sure why nobody mentioned a DISTINCT query.
Here is the way to clean duplicate rows:
CREATE OR REPLACE TABLE project.dataset.table
AS
SELECT DISTINCT * FROM project.dataset.table
If your schema doesn't have any RECORD (nested) fields, the below variation of Jordan's answer will work well enough, writing over the same table or to a new one, etc.:
SELECT <list of original fields>
FROM (
  SELECT *, ROW_NUMBER() OVER (PARTITION BY Fixed_Accident_Index) AS pos
  FROM Accidents.CleanedFilledCombined
)
WHERE pos = 1
In the more generic case - with a complex schema with RECORD/nested fields, etc. - the above approach can be a challenge.
I would propose trying the Tabledata: insertAll API with rows[].insertId set to the respective Fixed_Accident_Index for each row.
In this case duplicate rows will be eliminated by BigQuery.
Of course, this involves some client-side coding, so it might not be relevant for this particular question.
I haven't tried this approach myself either, but I feel it might be interesting to try :o)
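A hedged sketch of that idea with the google-cloud-bigquery Python client (payload fields are illustrative; insert_rows_json and its row_ids argument exist in current client versions, but verify against yours):
from google.cloud import bigquery

client = bigquery.Client()
rows = [{"Fixed_Accident_Index": "2015A1", "Severity": 3}]  # illustrative payload

# row_ids become the per-row insertId values BigQuery uses for
# best-effort streaming de-duplication
errors = client.insert_rows_json(
    "Accidents.CleanedFilledCombined",
    rows,
    row_ids=[r["Fixed_Accident_Index"] for r in rows],
)
if errors:
    print(errors)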
If you have a large partitioned table and only have duplicates in a certain partition range, you don't want to scan or process the whole table. Use the MERGE SQL below with predicates on the partition range:
-- WARNING: back up the table before this operation
-- FOR large-size timestamp-partitioned tables
-- -------------------------------------------
-- -- To de-duplicate rows of a given range of a partitioned table, using surrogate_key as unique id
-- -------------------------------------------
DECLARE dt_start DEFAULT TIMESTAMP("2019-09-17T00:00:00", "America/Los_Angeles");
DECLARE dt_end DEFAULT TIMESTAMP("2019-09-22T00:00:00", "America/Los_Angeles");
MERGE INTO `gcp_project`.`data_set`.`the_table` AS INTERNAL_DEST
USING (
  SELECT k.*
  FROM (
    SELECT ARRAY_AGG(original_data LIMIT 1)[OFFSET(0)] k
    FROM `gcp_project`.`data_set`.`the_table` AS original_data
    WHERE stamp BETWEEN dt_start AND dt_end
    GROUP BY surrogate_key
  )
) AS INTERNAL_SOURCE
ON FALSE
WHEN NOT MATCHED BY SOURCE
  AND INTERNAL_DEST.stamp BETWEEN dt_start AND dt_end  -- remove all data in partition range
  THEN DELETE
WHEN NOT MATCHED THEN INSERT ROW
credit: https://gist.github.com/hui-zheng/f7e972bcbe9cde0c6cb6318f7270b67a
Easier answer, without a subselect
SELECT
  *,
  ROW_NUMBER() OVER (PARTITION BY Fixed_Accident_Index) row_number
FROM Accidents.CleanedFilledCombined
WHERE TRUE
QUALIFY row_number = 1
The WHERE TRUE is necessary because QUALIFY needs a WHERE, GROUP BY, or HAVING clause.
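For what it's worth, the window function can also be written directly inside QUALIFY, which keeps the helper column out of the output (a sketch of the same idea, untested):
SELECT *
FROM Accidents.CleanedFilledCombined
WHERE TRUE
QUALIFY ROW_NUMBER() OVER (PARTITION BY Fixed_Accident_Index) = 1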
Felipe's answer is the best approach for most cases. Here is a more elegant way to accomplish the same:
CREATE OR REPLACE TABLE Accidents.CleanedFilledCombined
AS
SELECT
Fixed_Accident_Index,
ARRAY_AGG(x LIMIT 1)[SAFE_OFFSET(0)].* EXCEPT(Fixed_Accident_Index)
FROM Accidents.CleanedFilledCombined AS x
GROUP BY Fixed_Accident_Index;
To be safe, make sure you back up the original table before you run this ^^
I don't recommend using the ROW_NUMBER() OVER() approach if possible, since you may run into BigQuery memory limits and get unexpected errors.
Update the BigQuery schema with a new table column bq_uuid, making it NULLABLE and of type STRING.

Create duplicate rows, for example by running the same command 5 times:
insert into `beginner-290513.917834811114.messages` (id, type, flow, updated_at) values (19999, "hello", "inbound", '2021-06-08T12:09:03.693646')
Check whether duplicate entries exist:
select * from `beginner-290513.917834811114.messages` where id = 19999
Use the GENERATE_UUID function to generate a uuid for each message:

UPDATE `beginner-290513.917834811114.messages`
SET bq_uuid = GENERATE_UUID()
WHERE id > 0
Clean up the duplicate entries:
DELETE FROM `beginner-290513.917834811114.messages`
WHERE bq_uuid IN (
  SELECT bq_uuid
  FROM (
    SELECT bq_uuid,
           ROW_NUMBER() OVER (PARTITION BY updated_at ORDER BY bq_uuid) AS row_num
    FROM `beginner-290513.917834811114.messages`
  ) t
  WHERE t.row_num > 1
);

Performance for validating data against various database tables

My Problem:
I "loop" over a table into a local structure named ls_eban..
and with those information I must follow these instructions:
ls_eban-matnr MUST BE in table zmd_scmi_st01 ( 1. control Table (global) )
ls_eban-werks MUST BE in table zmd_scmi_st05 ( 2. control Table (global) )
ls_eban-knttp MUST BE in table zmd_scmi_st06 ( 3. control Table (global) )
I need a selection that is clear and performant. I actually have one, but it isn't performant at all.
My solution:
SELECT st01~matnr st05~werks st06~knttp
  FROM zmd_scmi_st01 AS st01
  INNER JOIN zmd_scmi_st05 AS st05
    ON st05~werks = ls_eban-werks
  INNER JOIN zmd_scmi_st06 AS st06
    ON st06~knttp = ls_eban-knttp
  INTO TABLE lt_control
  WHERE st01~matnr = ls_eban-matnr
    AND st01~bedarf = 'X'
    AND st05~bedarf = 'X'.
I should also say that the control tables don't have any relation to each other (no primary key and no secondary key).
The first thing you should not do is have the select inside the loop. Instead of
loop at lt_eban into ls_eban.
  select ...
endloop.
You should do a single select.
if lt_eban[] is not initial.
  select ...
    into table ...
    from ...
    for all entries in lt_eban
    where ...
endif.
There may be more inefficiencies to correct if we had more info (as mentioned in the comment by vwegert: for instance, really no keys on the control tables?), but the SELECT in a loop is the first thing that jumps out at me.
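For illustration, a hedged sketch of that pattern against the first control table from the question (field names taken from the asker's SELECT; untested):
" fetch all relevant control entries in one database round trip
DATA lt_st01 TYPE STANDARD TABLE OF zmd_scmi_st01.

IF lt_eban[] IS NOT INITIAL.
  SELECT matnr bedarf
    FROM zmd_scmi_st01
    INTO CORRESPONDING FIELDS OF TABLE lt_st01
    FOR ALL ENTRIES IN lt_eban
    WHERE matnr  = lt_eban-matnr
      AND bedarf = 'X'.
ENDIF.

" inside the loop, a cheap internal-table read replaces the DB hit
READ TABLE lt_st01 TRANSPORTING NO FIELDS
  WITH KEY matnr = ls_eban-matnr.
IF sy-subrc = 0.
  " ls_eban-matnr passed the first check
ENDIF.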

How to split FoxPro records?

I have 60,000 records in a dbf file in FoxPro. I want to split them into batches of 20,000 records each (20,000 * 3 = 60,000).
How can I achieve this?
I am new to FoxPro. I am using Visual FoxPro 5.0.
Thanks in advance.
You must issue a SKIP command between COPY commands to make sure each one starts on the next record:
USE MyTable
GO TOP
COPY NEXT 20000 TO NewTable1
SKIP 1
COPY NEXT 20000 TO NewTable2
SKIP 1
COPY NEXT 20000 TO NewTable3
Todd's suggestion will work if you don't care how the records are split. If you want to divide them up based on their content, you'll want to do something like Stuart's first suggestion, though his exact answer will only work if the IDs for the records run from 1 to 60,000 in order.
What's the ultimate goal here? Why divide the table up?
Tamar
You can directly select from the first table:
SELECT * FROM MyBigTable INTO TABLE SmallTable1 WHERE ID <= 20000
SELECT * FROM MyBigTable INTO TABLE SmallTable2 WHERE BETWEEN(ID, 20001, 40000)
SELECT * FROM MyBigTable INTO TABLE SmallTable3 WHERE ID > 40000
If you want more control, though, or you need to manipulate the data, you can use xbase code, something like this:
SELECT MyBigTable
scan
   scatter name oRecord memo
   do case
   case oRecord.Id < 20000
      select SmallTable1
      append blank
      gather name oRecord memo
   case oRecord.Id < 40000
      select SmallTable2
      append blank
      gather name oRecord memo
   otherwise
      select SmallTable3
      append blank
      gather name oRecord memo
   endcase
endscan  && endscan re-selects MyBigTable automatically
It's been a while since I used VFP and I don't have it here, so apologies for any syntax errors.
use YourTable in 0
select YourTable
go top
copy to NewTable1 next 20000
skip 1   && move past the last record copied before the next copy
copy to NewTable2 next 20000
skip 1
copy to NewTable3 next 20000
If you wanted to split based on record numbers, try this:
SELECT * FROM table INTO TABLE tbl1 WHERE RECNO() <= 20000
SELECT * FROM table INTO TABLE tbl2 WHERE BETWEEN(RECNO(), 20001, 40000)
SELECT * FROM table INTO TABLE tbl3 WHERE RECNO() > 40000
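If the batch size or the number of parts might change, the same record-number idea can be parameterized in a loop; a sketch (table and output names assumed, untested):
local lnBatch, lnI
lnBatch = 20000
for lnI = 1 to 3
   select * from MyBigTable ;
      into table ("tbl" + transform(m.lnI)) ;
      where between(recno(), (m.lnI-1)*m.lnBatch+1, m.lnI*m.lnBatch)
endfor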
