I am looking to migrate ZenCart customers to an existing Magento website. I tried two extensions from Magento Connect; however, neither of them works:
osCommerce Migration Tool
osCommerce Import
There are premium third-party migration services available on the market; however, I would like to do this on my own. Please help me out with some steps to do this through code.
Currently the ZenCart DB tables have the prefix "zen_". Will this cause any inconvenience? Please point me to a starting point on this. Thanks.
I have sorted this out on my own. What surprised me is that there is no valid documentation anywhere on the web on how to import customers from ZenCart/osCommerce to Magento, apart from forum posts pushing premium Magento extensions. Hence I am posting my solution here, which should help anyone looking for the same thing.
Fetching ZenCart customer details
ZenCart uses a MySQL database. Get into the MySQL prompt and look at the customers table with the following commands.
mysql> use ZenCart_DB;
mysql> select * from customers\G
You can see the details of your ZenCart customers there. A sample record for one customer looks like this:
customers_id: 1298
customers_gender: m
customers_firstname: firstname
customers_lastname: Lastname
customers_dob:
customers_email_address: customer@email.com
customers_nick:
customers_default_address_id:
customers_telephone: 12345678
customers_fax:
customers_password: dd2df54a57a4d35ffd2985b3584f0831:2c
customers_newsletter: 0
customers_group_pricing: 0
customers_email_format: TEXT
customers_authorization: 0
customers_referral:
customers_paypal_payerid:
customers_paypal_ec: 0
COWOA_account: 0
That is a sample record; now we have to dump all the customer details to a file on disk:
select * from customers into outfile 'customer.txt';
Now come out of the MySQL prompt. To create a Magento user we only need the first name, last name, email and password; these four details are mandatory. Hence we extract those details from the customer.txt file.
The customer.txt file will be located at /var/lib/mysql/ZenCart_DB/customer.txt.
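Optionally, you can export only the four required columns up front, which makes the later field splitting far less fragile. A hedged variant (the file name customer_min.txt is just an example):
-- Export only the columns Magento needs, tab-separated, so every
-- output line has exactly four predictable fields.
select customers_firstname, customers_lastname,
       customers_email_address, customers_password
from customers
into outfile 'customer_min.txt'
fields terminated by '\t';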
Extract the required customer details into separate files, which will help us feed them into a for loop later.
# Field numbers depend on your column layout; with tab-separated output and
# possibly empty columns, use awk -F'\t' and verify the positions first.
awk '{print $3,$4,$7,$10}' customer.txt > details.txt
awk '{print $1}' details.txt > zen_firstname
awk '{print $2}' details.txt > zen_lastname
awk '{print $3}' details.txt > zen_email
awk '{print $4}' details.txt > zen_password
Creating a Magento user
Now we have collected all the details. To create a test Magento user from the backend, we have to run 5 MySQL queries against the Magento database. They are:
INSERT INTO customer_entity (entity_id, entity_type_id, attribute_set_id, website_id, email, group_id, increment_id, store_id, created_at, updated_at, is_active, disable_auto_group_change) VALUES (1, 1, 0, 1, '$email', 1, NULL, 4, '2014-11-24 11:50:33', '2014-11-24 12:05:53', 1, 0);
INSERT INTO customer_entity_varchar (value_id, entity_type_id, attribute_id, entity_id, value) VALUES (1, 1, 5, 1, '$firstname');
INSERT INTO customer_entity_varchar (value_id, entity_type_id, attribute_id, entity_id, value) VALUES (2, 1, 7, 1, '$lastname');
INSERT INTO customer_entity_varchar (value_id, entity_type_id, attribute_id, entity_id, value) VALUES (3, 1, 12, 1, '$password');
INSERT INTO customer_entity_varchar (value_id, entity_type_id, attribute_id, entity_id, value) VALUES (5, 1, 3, 1, 'English');
If you have too many customers to add by hand, it is better to generate these queries in a loop. That part is up to you: you can write the loop in bash, Perl, or whatever language you are comfortable with, or use the SQL-only alternative sketched below.
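As an SQL-only alternative to a shell loop: if the ZenCart and Magento databases sit on the same MySQL server, a cross-database INSERT ... SELECT can copy everything in a couple of statements. This is only a hedged sketch; zencart_db and magento_db are placeholder names, and the attribute ids simply mirror the queries above, so verify everything against your own install first.
-- Create the customer entities, letting entity_id auto-increment.
INSERT INTO magento_db.customer_entity
    (entity_type_id, attribute_set_id, website_id, email, group_id,
     store_id, created_at, updated_at, is_active, disable_auto_group_change)
SELECT 1, 0, 1, c.customers_email_address, 1, 4, NOW(), NOW(), 1, 0
FROM zencart_db.zen_customers c;

-- Attach the first name (attribute_id 5), matching entities back by email;
-- repeat with attribute_id 7 / customers_lastname and 12 / customers_password.
INSERT INTO magento_db.customer_entity_varchar
    (entity_type_id, attribute_id, entity_id, value)
SELECT 1, 5, e.entity_id, c.customers_firstname
FROM zencart_db.zen_customers c
JOIN magento_db.customer_entity e ON e.email = c.customers_email_address;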
A few things to note:
The entity_id value is important: it must be the same in all 5 queries, since it determines the customer's ID number.
value_id is like a serial number for the rows in the customer_entity_varchar table. Follow the sequence as shown.
attribute_id should be kept in the sequence 5, 7, 12, 3, as these represent the customer's first name, last name, password, and language respectively; the query below can verify them.
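Attribute ids can differ between installations, so it is worth confirming them before a bulk insert. On a standard Magento 1 schema a lookup like this shows the mapping (the attribute codes listed are the usual defaults, so treat them as an assumption):
-- Verify the hard-coded attribute ids (5, 7, 12) against this install.
SELECT attribute_id, attribute_code
FROM eav_attribute
WHERE entity_type_id = 1
  AND attribute_code IN ('firstname', 'lastname', 'password_hash');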
That's all I think. Thanks. :)
I want to put the values of the checked names from a checkbox in Oracle APEX into a table, but I am unable to do that. I tried taking help from Google, but the steps mentioned there didn't work either. Could you please advise how to do that? I have taken a blank form, added a checkbox to it, and fetched the checkbox values from an LOV.
As I understand it, you are using a List of Values as the source of your checkboxes. Let's say you have the following values there:
return value   display value
-----------------------------
123            andAND
456            Dibya
789            Anshul
321            aafirst
555            Anuj
When you select several values, APEX puts their return values into a string and separates them with :. So for the case in your screenshot, the value in item P12_NEW will be 123:555. To split these values you can use the following query:
select regexp_substr(:P12_NEW, '[^:]+', 1, level) as val
from dual
connect by regexp_substr(:P12_NEW, '[^:]+', 1, level) is not null
The result will be:
val
---
123
555
Next, you need to put these values into table usr_amt (let's say with columns user_id and amount, and the amount is entered into item P12_AMOUNT):
merge into usr_amt t
using (select regexp_substr(:P12_NEW, '[^:]+', 1, level) user_id
from dual
connect by regexp_substr(:P12_NEW, '[^:]+', 1, level) is not null
) n on (n.user_id = t.user_id)
when matched then update
set t.amount = :P12_AMOUNT
when not matched then insert (user_id, amount)
values (n.user_id, :P12_AMOUNT)
This query will search the table for each selected user_id; if the user is present there, it updates the corresponding amount to the value of item :P12_AMOUNT, and if not present, it inserts a row with the user_id and the value.
I'm sorry but I don't know how to explain exactly what I'm asking with words, so here's an example:
http://sqlfiddle.com/#!15/2564a/1/1
create table section(id serial primary key, name text not null);
create table book(id serial primary key, name text not null,
section_id integer not null references section(id));
create table author(id serial primary key, name text not null);
create table author_books(
author_id integer not null references author(id),
book_id integer not null references book(id),
unique(author_id, book_id)
);
create index on book(name);
create index on book(section_id);
create index on author(name);
create index on author_books(author_id, book_id);
insert into section(name) values ('Romance'), ('Terror');
insert into book(name, section_id) values ('Wonderful World', 1), ('Terrible World', 1), ('Simple World', 1), ('Irrelevant', 2);
insert into author(name) values ('Jill'), ('Mark'), ('Tim');
insert into author_books values (1, 1), (2, 1), (3, 1), (1, 2), (3, 2), (3, 3), (3, 4);
select b.section_id, b.name, a.name from book b
join author_books ab on b.id=ab.book_id
join author a on a.id=ab.author_id;
select distinct s.name from section s
join book b on b.section_id=s.id
join author_books ab on b.id=ab.book_id
join author a on a.id=ab.author_id
where a.name in ('Jill', 'Tim')
group by s.id
having count(distinct a.name) >= 2;
This query returns the expected result; however, I'm interested in knowing whether it's possible to change it to perform better somehow. It's not clear to me what PostgreSQL will do in this case. For example, after evaluating the first book in the Romance section that matches the criteria, ideally it should skip processing any other books in the Romance section to speed up query execution. Also, as soon as it finds the authors Jill and Tim, it should probably stop processing the other author checks, since the count(distinct a.name) >= 2 condition is already met.
Is there any way to help PG to apply such optimizations with changes in the query?
Just to be clear, the query's intention is to find all sections containing at least one book written by both Jill and Tim.
The sort of nitpicky optimizations you're talking about are the sort of thing the query engine is meant to keep out of your hands. It's helpful to think of databases as operating on sets of rows rather than as inspecting results row by row: the engine retrieves all section rows, generates the product of section with book, discards all rows that fail to satisfy the JOIN predicate, and so on. The query planner can optimize things ahead of time by switching the order of operations around to minimize the number of rows it has to deal with overall, but it's not going to stop in the middle.
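That said, the stated intent ("at least one book written by both") can be expressed with semi-joins, which at least gives the planner a shape where it may stop scanning a book's author list as soon as each author is found. A hedged sketch against the schema above; note this requires both authors on the same book, which is slightly stricter than the group-by version:
select distinct s.name
from section s
join book b on b.section_id = s.id
where exists (select 1 from author_books ab
              join author a on a.id = ab.author_id
              where ab.book_id = b.id and a.name = 'Jill')
  and exists (select 1 from author_books ab
              join author a on a.id = ab.author_id
              where ab.book_id = b.id and a.name = 'Tim');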
Indexing your foreign keys will help; indexing author.name will help; indexing section.name is probably pointless.
You could also create a materialized view from the query, if performance is more important than the results always being current.
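For example, a hedged sketch (the view name is made up; refresh it whenever the underlying data changes):
create materialized view jill_and_tim_sections as
select s.name
from section s
join book b on b.section_id = s.id
join author_books ab on b.id = ab.book_id
join author a on a.id = ab.author_id
where a.name in ('Jill', 'Tim')
group by s.id, s.name
having count(distinct a.name) >= 2;

-- Re-run after the data changes:
refresh materialized view jill_and_tim_sections;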
I need to export a list of all orders between dates X & Y that shows the following:
Order ID
State Shipping
Zip Shipped
Sales Tax Collected
Is there an easy query I can run to pull this information from the orders table?
The current X is January 1, 2015; the current Y is March 31, 2015.
I really only need orders shipped TO California (the only state where we charge tax), but I can filter that out later by sorting the exported CSV list.
Thank you!
You need two tables to get your data; here is the SQL:
SELECT a.increment_id AS 'Order ID', b.region AS 'State Shipping', b.postcode AS 'Zip Shipped', a.base_tax_amount AS 'Sales Tax Collected'
FROM sales_flat_order a
JOIN sales_flat_order_address b
ON a.entity_id = b.parent_id
WHERE a.created_at >= '2015-01-01 00:00:00' AND a.created_at <= '2015-03-31 23:59:59'
GROUP BY a.entity_id
A few things to be careful about:
there are many tax-related fields in the sales_flat_order table; I am not sure base_tax_amount is exactly the one you are looking for.
you might want to adjust the created_at values. In my case the Magento order creation time is 11 hours ahead of my computer time, probably a timezone issue.
the GROUP BY is there to get rid of the duplicate rows that appear after joining the two tables (each order has both a billing and a shipping address row).
The query below will help you; you can add a WHERE clause as per your requirements.
SELECT increment_id AS `Order Id`, address.region AS `state`,
       address.postcode AS `zipcode`, order.base_subtotal_incl_tax AS `tax`
FROM sales_flat_order `order`
JOIN sales_flat_order_address `address` ON order.entity_id = address.parent_id
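Since the question only needs orders shipped to California in a given date range, here is a hedged example of that WHERE clause. It uses base_tax_amount for the collected tax, as in the first answer, and it assumes region stores the full state name and that the standard Magento 1 address_type column is present; check your own sales_flat_order_address rows first:
SELECT o.increment_id AS `Order ID`, a.region AS `State Shipping`,
       a.postcode AS `Zip Shipped`, o.base_tax_amount AS `Sales Tax Collected`
FROM sales_flat_order o
JOIN sales_flat_order_address a ON o.entity_id = a.parent_id
WHERE o.created_at BETWEEN '2015-01-01 00:00:00' AND '2015-03-31 23:59:59'
  -- keep only the shipping address row, so no GROUP BY deduplication is needed
  AND a.address_type = 'shipping'
  AND a.region = 'California';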
I am developing a report which should display the data horizontally.
What must be shown is the following:
email@1.com 12/09/2013 11/09/2013 10/09/2013 09/09/2013...
email@2.com 22/03/2013 21/03/2013 12/02/2013 02/01/2013...
Well, I have these data organized in two tables:
Member and Report.
The Member table has the email address and the Report table has dates, and each email can have many different dates.
I can easily retrieve that information vertically:
SELECT M.EMAIL, R.LAST_OPEN_DATE
FROM MEMBER M, REPORT R
WHERE M.MEMBER_ID = R.MEMBER_ID
AND R.STATUS = 1
AND TRUNC(R.LAST_OPEN_DATE) >= TRUNC(SYSDATE) - 120;
However, showing the results horizontally is more complicated. Does anyone have a tip, or know how I can do this?
I'm using Oracle 11g.
You can get the dates into columns with pivot:
SELECT *
FROM (
SELECT M.EMAIL, R.LAST_OPEN_DATE,
ROW_NUMBER() OVER (PARTITION BY M.MEMBER_ID
ORDER BY R.LAST_OPEN_DATE DESC) AS RN
FROM MEMBER M, REPORT R
WHERE M.MEMBER_ID = R.MEMBER_ID
AND R.STATUS = 1
AND TRUNC(R.LAST_OPEN_DATE) >= TRUNC(SYSDATE) - 120
)
PIVOT (MIN(LAST_OPEN_DATE) FOR (RN) IN (1, 2, 3, 4, 5, 6, 7, 8));
SQL Fiddle.
Essentially this is assigning a number to each report date for each member, and then the pivot is based on that ranking number.
But you'd need to have each of the possible number of days listed; if you can have up to 240 report dates, the PIVOT IN clause would need to be every number up to 240, i.e. IN (1, 2, 3, ..., 239, 240), not just up to eight as in that Fiddle.
If you ever had a member with more than 240 dates you wouldn't see some of them, so whatever high number you pick would have to be high enough to cover every possibility, now and in the foreseeable future. As your query is limited to 120 days, even 240 seems quite high, but perhaps you have more than one per day - in which case there is no real upper limit.
You could potentially have to format each date column individually, but hopefully your reporting layer is taking care of that.
If you just wanted to perform string aggregation of the multiple dates for each email, you could do this in 11g:
SELECT M.EMAIL,
LISTAGG(TO_CHAR(R.LAST_OPEN_DATE, 'DD/MM/YYYY'), ' ')
WITHIN GROUP (ORDER BY R.LAST_OPEN_DATE DESC) AS DATES
FROM MEMBER M, REPORT R
WHERE M.MEMBER_ID = R.MEMBER_ID
AND R.STATUS = 1
AND TRUNC(R.LAST_OPEN_DATE) >= TRUNC(SYSDATE) - 120
GROUP BY M.EMAIL;
EMAIL                DATES
-------------------- -------------------------------------------
email@1.com          12/04/2014 11/04/2014 10/04/2014 09/04/2014
email@2.com          12/05/2014 02/04/2014 22/03/2014 21/03/2014
SQL Fiddle.
Which is OK for a text report, but not if this query is feeding into a reporting tool.
First of all, the number of columns in a query is determined beforehand and can't be adjusted by the data. To overcome that, you might be interested in dynamic SQL.
But in the simple static case, you will need to use the PIVOT construction.
As a first step, assign row numbers that will become the columns (last_date must also be projected here so the pivot can aggregate it):
select email, last_date,
       row_number() over (partition by email order by last_date) col
from yourtable
then you add "magic" PIVOT:
<your query>
PIVOT
(
max(last_date)
for col in (1, 2, 3, ..., 240)
)
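Put together, with the IN list truncated to three columns for brevity (yourtable and last_date are the names from the fragments above):
select *
from (
  select email, last_date,
         row_number() over (partition by email order by last_date) col
  from yourtable
)
pivot (
  max(last_date)
  for col in (1, 2, 3)
);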
I have a database built in Ruby using SQLite in the following way:
db.execute "CREATE TABLE IF NOT EXISTS Problems(ID INTEGER, stem BLOB NOT NULL, answer BLOB, datetime TEXT, lastmodifiedby TEXT, primary key (ID, datetime) )"
db.execute "INSERT INTO Problems VALUES(1, 'stem', 'answer', '12/26/2012 2:52:18 PM', 'bob')"
db.execute "INSERT INTO Problems VALUES(1, 'stem modified', 'answer', '12/26/2012 2:52:19 PM', 'bob')"
db.execute "INSERT INTO Problems VALUES(1, 'stem modified further', 'answer', '12/26/2012 2:52:20 PM', 'bob')"
The IDs for the first three entries are the same; however, the times are different. I am currently using the following code to extract a single entry:
db = SQLite3::Database.new "#{dbname}"
stm = db.prepare "SELECT * FROM Problems WHERE ID=?"
stm.bind_param 1, id
rs = stm.execute
problem = rs.next
My first question - is there a way to condense the last 4 lines of code?
Second, when I select an entry from the Problems database, how would I add an option so that the most recent entry (in this case, the third one) is chosen?
And finally, how do I go about selecting all entries of a certain ID (here I only have the int 1, but in reality there are many others) so that I can output them as a string / write to a file, etc.
I have found answers to questions regarding most recent entry selection, but they seem quite complex. Would an ORDER BY work in some way?
Thanks for the help.
First of all, I think you have a data format problem. I don't think SQLite will understand '12/26/2012 2:52:18 PM' as a timestamp so you'll end up comparing your timestamps as strings. For example, if I add '12/26/2012 2:52:20 AM' to the mix, I get '12/26/2012 2:52:18 PM' and '12/26/2012 2:52:20 PM' as the lowest and highest values and that only makes sense if they're being compared as strings. Switch your data to ISO 8601 format so that you have these:
2012-12-26 14:52:18
2012-12-26 14:52:19
2012-12-26 14:52:20
and things will sort properly.
Once you have that fixed, you can use ORDER BY and LIMIT to peel off just one record:
stm = db.prepare('select * from Problems where ID = ? order by datetime desc limit 1')
rs = stm.execute(1)
problem = rs.next
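For the last part of the question, selecting every entry for a given ID, the same ORDER BY without the LIMIT does the job; iterate over the result set in Ruby to build your string or file output:
-- All revisions of one problem, newest first; with ISO 8601 timestamps
-- the string ordering used by SQLite matches chronological order.
select * from Problems where ID = ? order by datetime desc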