ClickHouse error Code: 999 when running OPTIMIZE TABLE on a MATERIALIZED VIEW with the ReplicatedReplacingMergeTree engine

It's a ClickHouse cluster with 2 shards and 2 replicas, i.e. 4 ClickHouse nodes.
When I run OPTIMIZE TABLE on one node, the following error occurs,
but the same statement runs normally on any of the other ClickHouse nodes.
risk-luck2.dg.163.org :) optimize table risk_detect_test.risk_doubtful_user_daily_device_view_lyp;
OPTIMIZE TABLE risk_detect_test.risk_doubtful_user_daily_device_view_lyp
Received exception from server (version 20.4.4):
Code: 999. DB::Exception: Received from localhost:9000. DB::Exception: Can't get data for node /clickhouse/tables/test/01-02/risk_doubtful_user_daily_device_view_lyp/replicas/risk-olap6.dg.163.org (multiple leaders Ok)/host: node doesn't exist (No node).
0 rows in set. Elapsed: 0.002 sec.
risk-luck2.dg.163.org :) show create table risk_detect_test.risk_doubtful_user_daily_device_view_lyp;
SHOW CREATE TABLE risk_detect_test.risk_doubtful_user_daily_device_view_lyp
┌─statement──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ CREATE MATERIALIZED VIEW risk_detect_test.risk_doubtful_user_daily_device_view_lyp
(
`app_id` String,
`event_date` Date,
`device_id` UInt32
)
ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/test/{layer}-{shard}/risk_doubtful_user_daily_device_view_lyp', '{replica}')
PARTITION BY toYYYYMM(event_date)
PRIMARY KEY app_id
ORDER BY (app_id, event_date, device_id)
SETTINGS index_granularity = 8192 AS
SELECT
app_id,
event_date,
xxHash32(device_id) AS device_id
FROM risk_detect_online.dwd_risk_doubtful_detail │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

It seems to be another bug in CH.
ENGINE = ReplicatedReplacingMergeTree(
'/clickhouse/tables/test/{layer}-{shard}/risk_doubtful_user_daily_device_view_lyp', '{replica}')
Can't get data for node
/clickhouse/tables/online/01-02/risk_doubtful_user_daily_device_view/replicas/risk-olap6.dg.163.org
CH tries to use an incorrect ZooKeeper path in the case of a Mat. View:
risk_doubtful_user_daily_device_view instead of risk_doubtful_user_daily_device_view_lyp.
The database part is also incorrect: tables/online/01-02/ instead of /tables/test/{layer}-{shard}/.
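To see what actually exists under the table's ZooKeeper path, you can query the system.zookeeper table (a minimal sketch; the path is the one from your table definition with the macros expanded, as it appears in the error message):

SELECT name
FROM system.zookeeper
WHERE path = '/clickhouse/tables/test/01-02/risk_doubtful_user_daily_device_view_lyp/replicas'

This lists the replica nodes registered in ZooKeeper, which you can compare against the path the error complains about.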
I suggest you switch to the "TO" notation: https://den-crane.github.io/Everything_you_should_know_about_materialized_views_commented.pdf
Or run OPTIMIZE against the inner table:
OPTIMIZE TABLE "risk_detect_test"."inner.risk_doubtful_user_daily_device_view_lyp";

clickhouse-server.log is as follows:
2021.08.18 16:37:11.384434 [ 128614 ] {b6de1d84-a238-4e2f-9af4-3ce0ddf8551d} <Debug> executeQuery: (from 10.200.128.91:40236) insert into dwd_risk_detect_detail(app_id, app_type, app_version, city, created_at, defense_count, defense_result, detect_count, device_code, device_id, id, ip, model, os_version, package_name, phone_brand, platform, province, region, risk_type1, risk_type2, risk_type3, role_account, role_id, sdk_version, sign_hash, ts) FORMAT TabSeparated
2021.08.18 16:37:11.384735 [ 128614 ] {b6de1d84-a238-4e2f-9af4-3ce0ddf8551d} <Trace> ContextAccess (default): Access granted: INSERT(app_id, app_type, app_version, city, created_at, defense_count, defense_result, detect_count, device_code, device_id, id, ip, model, os_version, package_name, phone_brand, platform, province, region, risk_type1, risk_type2, risk_type3, role_account, role_id, sdk_version, sign_hash, ts) ON risk_detect_online.dwd_risk_detect_detail
2021.08.18 16:37:11.385706 [ 128614 ] {b6de1d84-a238-4e2f-9af4-3ce0ddf8551d} <Debug> InterpreterSelectQuery: MergeTreeWhereOptimizer: condition "risk_type1 != 0" moved to PREWHERE
2021.08.18 16:37:11.386554 [ 128614 ] {b6de1d84-a238-4e2f-9af4-3ce0ddf8551d} <Trace> ContextAccess (default): Access granted: SELECT(id, app_id, app_type, device_id, role_id, defense_result, risk_type1, risk_type2, risk_type3, defense_count, detect_count, event_date, event_hour, event_minute) ON risk_detect_online.dwd_risk_detect_detail
2021.08.18 16:37:11.386764 [ 128614 ] {b6de1d84-a238-4e2f-9af4-3ce0ddf8551d} <Trace> ContextAccess (default): Access granted: INSERT(app_id, app_type, event_date, event_hour, event_minute, risk_type1, risk_type2, risk_type3, defense_result, defense_count, detect_count, device_id, role_id, id) ON risk_detect_online.`.inner.risk_stat_view`
2021.08.18 16:37:11.387323 [ 128614 ] {b6de1d84-a238-4e2f-9af4-3ce0ddf8551d} <Trace> ContextAccess (default): Access granted: SELECT(app_id, app_type, device_id, role_id, event_date) ON risk_detect_online.dwd_risk_detect_detail
2021.08.18 16:37:11.387434 [ 128614 ] {b6de1d84-a238-4e2f-9af4-3ce0ddf8551d} <Trace> ContextAccess (default): Access granted: INSERT(app_id, app_type, event_date, device_id, role_id) ON risk_detect_online.`.inner.risk_total_user_stat_view`
2021.08.18 16:37:11.578506 [ 128861 ] {819b05a8-5ad0-414f-a0a7-111c765cac57} <Debug> executeQuery: (from 127.0.0.1:40932) OPTIMIZE TABLE risk_detect_online.risk_doubtful_user_daily_device_view
2021.08.18 16:37:11.578659 [ 128861 ] {819b05a8-5ad0-414f-a0a7-111c765cac57} <Trace> ContextAccess (default): Access granted: OPTIMIZE ON risk_detect_online.risk_doubtful_user_daily_device_view
2021.08.18 16:37:11.580097 [ 128861 ] {819b05a8-5ad0-414f-a0a7-111c765cac57} <Error> executeQuery: Code: 999, e.displayText() = Coordination::Exception: Can't get data for node /clickhouse/tables/online/01-02/risk_doubtful_user_daily_device_view/replicas/risk-olap6.dg.163.org (multiple leaders Ok)/host: node doesn't exist (No node) (version 20.4.4.18 (official build)) (from 127.0.0.1:40932) (in query: OPTIMIZE TABLE risk_detect_online.risk_doubtful_user_daily_device_view), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) # 0x104191d0 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) # 0x8fff8ad in /usr/bin/clickhouse
2. Coordination::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int) # 0xdddf7d8 in /usr/bin/clickhouse
3. Coordination::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) # 0xdddfe02 in /usr/bin/clickhouse
4. ? # 0xddf1f60 in /usr/bin/clickhouse
5. DB::StorageReplicatedMergeTree::sendRequestToLeaderReplica(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&) # 0xd76117e in /usr/bin/clickhouse
6. DB::StorageReplicatedMergeTree::optimize(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::IAST> const&, bool, bool, DB::Context const&) # 0xd762546 in /usr/bin/clickhouse
7. DB::StorageMaterializedView::optimize(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::IAST> const&, bool, bool, DB::Context const&) # 0xd6d5a9d in /usr/bin/clickhouse
8. DB::InterpreterOptimizeQuery::execute() # 0xd225346 in /usr/bin/clickhouse
9. ? # 0xd5499f9 in /usr/bin/clickhouse
10. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, bool) # 0xd54d025 in /usr/bin/clickhouse
11. DB::TCPHandler::runImpl() # 0x9106678 in /usr/bin/clickhouse
12. DB::TCPHandler::run() # 0x9107650 in /usr/bin/clickhouse
13. Poco::Net::TCPServerConnection::start() # 0x10304f4b in /usr/bin/clickhouse
14. Poco::Net::TCPServerDispatcher::run() # 0x103053db in /usr/bin/clickhouse
15. Poco::PooledThread::run() # 0x104b2fa6 in /usr/bin/clickhouse
16. Poco::ThreadImpl::runnableEntry(void*) # 0x104ae260 in /usr/bin/clickhouse
17. start_thread # 0x74a4 in /lib/x86_64-linux-gnu/libpthread-2.24.so
18. __clone # 0xe8d0f in /lib/x86_64-linux-gnu/libc-2.24.so
2021.08.18 16:37:11.580526 [ 128861 ] {819b05a8-5ad0-414f-a0a7-111c765cac57} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2021.08.18 16:37:11.580592 [ 128861 ] {} <Information> TCPHandler: Processed in 0.002 sec.

Related

Drop table fails with "Checksum doesn't match: corrupted data" exception on clickhouse

So our unit tests for ClickHouse started failing. They fail on simple SQL:
::clickhouse::Client(client_options_).Execute("DROP TABLE IF EXISTS test.delme");
For the client options I have host, default_database, user and password set.
The error:
[clickhouse error 40, DB::Exception: Checksum doesn't match: corrupted data. Reference: 8a58086e26544cb09217aa1bba09a1d9. Actual: 7c7a5cd56cac83a714e286dbbd46acb5. Size of compressed block: 20]
Errors on the server:
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) # 0xa38beba in /usr/bin/clickhouse
1. ? # 0x140ae996 in /usr/bin/clickhouse
2. DB::CompressedReadBufferBase::readCompressedData(unsigned long&, unsigned long&, bool) # 0x140ad956 in /usr/bin/clickhouse
3. ? # 0x140ace9f in /usr/bin/clickhouse
4. DB::NativeReader::read() # 0x15cf19c4 in /usr/bin/clickhouse
5. DB::TCPHandler::receiveData(bool) # 0x15ccb990 in /usr/bin/clickhouse
6. DB::TCPHandler::receivePacket() # 0x15cc0a4f in /usr/bin/clickhouse
7. DB::TCPHandler::readDataNext() # 0x15cc3c9f in /usr/bin/clickhouse
8. ? # 0x15cceb68 in /usr/bin/clickhouse
9. DB::Context::initializeExternalTablesIfSet() # 0x1474b5f6 in /usr/bin/clickhouse
10. ? # 0x14feb237 in /usr/bin/clickhouse
11. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) # 0x14fe9f0e in /usr/bin/clickhouse
12. DB::TCPHandler::runImpl() # 0x15cb97ad in /usr/bin/clickhouse
13. DB::TCPHandler::run() # 0x15ccdd59 in /usr/bin/clickhouse
14. Poco::Net::TCPServerConnection::start() # 0x18a617b3 in /usr/bin/clickhouse
15. Poco::Net::TCPServerDispatcher::run() # 0x18a62c2d in /usr/bin/clickhouse
16. Poco::PooledThread::run() # 0x18c2d9c9 in /usr/bin/clickhouse
17. Poco::ThreadImpl::runnableEntry(void*) # 0x18c2b242 in /usr/bin/clickhouse
18. ? # 0x7f4e74010609 in ?
19. __clone # 0x7f4e73f35133 in ?
The table does not exist, so I have no idea what data is corrupted.
ClickHouse version: 22.8.2.11, using the C++ client (https://github.com/ClickHouse/clickhouse-cpp).
I will try to recreate the database and user, but I am wondering what led to these errors.
I'm not able to comment, so I'll write an answer.
Have you tried dropping the database test?
Also check the system.parts table to see whether there are leftover parts for this table. If yes, drop them.
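A minimal sketch of that check, using the database and table from the question:

SELECT partition, name, active
FROM system.parts
WHERE database = 'test' AND table = 'delme'

If any rows come back, those leftover parts are candidates for cleanup.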
Best regards,
Albert
albert1.cornelius+stackoverflow#gmail.com

GraphQL - How to add/manipulate response columns

Is there a way to pass a static variable to a GraphQL endpoint so that it is returned with the response?
In my case I'm pulling timesheets for a specific userId. Unfortunately, the userId isn't returned in the response.
Types
type Query {
# Returns a timesheet for user id and given date.
#
# Arguments
# userId: User's id.
# date: Date.
timesheet(userId: ID, date: Date!): Timesheet
}
type Timesheet {
# Id.
id: ID
# Date.
date: Date
# Expected time based on work schedule in seconds.
expectedTime: Int
# Tracked time in seconds.
trackedTime: Int
# Time off time in seconds.
timeOffTime: Int
# Holiday time in seconds.
holidayTime: Int
# Sum of tracked, holiday, time off time minus break time.
totalTime: Int
# Break time in seconds.
breakTime: Int
}
Request Body
{"query":"{
timesheets(userId: \"10608\", dateFrom: \"2022-01-01\", dateTo: \"2022-12-31\") {
items { id date trackedTime timeOffTime holidayTime totalTime breakTime }
}
}"}
Example Response (columns are data.timesheets.items.*)
id       date        trackedTime  timeOffTime  holidayTime  totalTime  breakTime
3646982  2022-01-01  0            0            0            0          0
3495676  2022-01-02  18000        0            0            18000      0
3500068  2022-01-03  35100        0            0            35100      0
Desired Response (userId added; the other columns are data.timesheets.items.* as above)
userId  id       date        trackedTime  timeOffTime  holidayTime  totalTime  breakTime
10608   3646982  2022-01-01  0            0            0            0          0
10608   3495676  2022-01-02  18000        0            0            18000      0
10608   3500068  2022-01-03  35100        0            0            35100      0

Ran out of memory searching text in ClickHouse

I'm investigating whether ClickHouse is a good option for OLAP purposes. To do so, I replicated some queries I have running on PostgreSQL, using ClickHouse's syntax.
All the queries I have run are much faster than on Postgres, but the ones that perform text search run out of memory. Below are the error code and the stack trace.
clickhouse_driver.errors.ServerException: Code: 241. DB::Exception:
Memory limit (for query) exceeded: would use 9.31 GiB (attempt to
allocate chunk of 524288 bytes), maximum: 9.31 GiB.
The script for the query is:
SELECT COUNT(*)
FROM ObserverNodeOccurrence as occ
LEFT JOIN
ObserverNodeOccurrence_NodeElements as occ_ne
ON occ._id = occ_ne.occurrenceId
WHERE
occ_ne.snippet LIKE '%<img>%'
The query above counts the number of rows whose snippet column contains an HTML image tag (<img>). This column contains HTML snippets, so searching this text is quite expensive. A close/mid-term goal is to parse this column and convert it into a set of other columns (e.g. contains_img, contains_script, etc.). But for now, I would like to be able to run such a query without running out of memory.
My questions are:
How can I successfully execute text-search queries on such a column without running out of memory?
Is there a way to force the query planner to use disk as soon as it runs out of memory?
I am using the MergeTree engine. Is there another engine that is able to split the load between RAM and disk?
Full stack trace:
clickhouse_driver.errors.ServerException: Code: 241.
DB::Exception: Memory limit (for query) exceeded: would use 9.31 GiB (attempt to allocate chunk of 524288 bytes), maximum: 9.31 GiB. Stack trace:
0. /usr/bin/clickhouse-server(StackTrace::StackTrace()+0x22) [0x781c272]
1. /usr/bin/clickhouse-server(MemoryTracker::alloc(long)+0x8ba) [0x71bbb4a]
2. /usr/bin/clickhouse-server(MemoryTracker::alloc(long)+0xc5) [0x71bb355]
3. /usr/bin/clickhouse-server() [0x67aeb4e]
4. /usr/bin/clickhouse-server() [0x67af010]
5. /usr/bin/clickhouse-server() [0x67e5af4]
6. /usr/bin/clickhouse-server(void DB::Join::joinBlockImpl<(DB::ASTTableJoin::Kind)1, (DB::ASTTableJoin::Strictness)2, DB::Join::MapsTemplate<DB::JoinStuff::WithFlags<DB::RowRefList, false> > >(DB::Block&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, DB::NamesAndTypesList const&, DB::Block const&, DB::Join::MapsTemplate<DB::JoinStuff::WithFlags<DB::RowRefList, false> > const&) const+0xe1c) [0x68020dc]
7. /usr/bin/clickhouse-server(DB::Join::joinBlock(DB::Block&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, DB::NamesAndTypesList const&) const+0x1a5) [0x67bc415]
8. /usr/bin/clickhouse-server(DB::ExpressionAction::execute(DB::Block&, bool) const+0xa5d) [0x6d961dd]
9. /usr/bin/clickhouse-server(DB::ExpressionActions::execute(DB::Block&, bool) const+0x45) [0x6d97545]
10. /usr/bin/clickhouse-server(DB::ExpressionBlockInputStream::readImpl()+0x48) [0x6c52888]
11. /usr/bin/clickhouse-server(DB::IBlockInputStream::read()+0x188) [0x6635628]
12. /usr/bin/clickhouse-server(DB::FilterBlockInputStream::readImpl()+0xd9) [0x6c538b9]
13. /usr/bin/clickhouse-server(DB::IBlockInputStream::read()+0x188) [0x6635628]
14. /usr/bin/clickhouse-server(DB::ExpressionBlockInputStream::readImpl()+0x2d) [0x6c5286d]
15. /usr/bin/clickhouse-server(DB::IBlockInputStream::read()+0x188) [0x6635628]
16. /usr/bin/clickhouse-server(DB::ParallelInputsProcessor<DB::ParallelAggregatingBlockInputStream::Handler>::loop(unsigned long)+0x139) [0x6c7f409]
17. /usr/bin/clickhouse-server(DB::ParallelInputsProcessor<DB::ParallelAggregatingBlockInputStream::Handler>::thread(std::shared_ptr<DB::ThreadGroupStatus>, unsigned long)+0x209) [0x6c7fc79]
18. /usr/bin/clickhouse-server(ThreadFromGlobalPool::ThreadFromGlobalPool<void (DB::ParallelInputsProcessor<DB::ParallelAggregatingBlockInputStream::Handler>::*)(std::shared_ptr<DB::ThreadGroupStatus>, unsigned long), DB::ParallelInputsProcessor<DB::ParallelAggregatingBlockInputStream::Handler>*, std::shared_ptr<DB::ThreadGroupStatus>, unsigned long&>(void (DB::ParallelInputsProcessor<DB::ParallelAggregatingBlockInputStream::Handler>::*&&)(std::shared_ptr<DB::ThreadGroupStatus>, unsigned long), DB::ParallelInputsProcessor<DB::ParallelAggregatingBlockInputStream::Handler>*&&, std::shared_ptr<DB::ThreadGroupStatus>&&, unsigned long&)::{lambda()#1}::operator()() const+0x7f) [0x6c801cf]
19. /usr/bin/clickhouse-server(ThreadPoolImpl<std::thread>::worker(std::_List_iterator<std::thread>)+0x1af) [0x71c778f]
20. /usr/bin/clickhouse-server() [0xb2ac5bf]
21. /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7fc5b50826db]
22. /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7fc5b480988f]
Run clickhouse-client in a terminal:
set max_bytes_before_external_group_by=20000000000; --20 GB for external group by
set max_memory_usage=40000000000; --40GB for memory limit
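The same limits can also be applied to a single query via a SETTINGS clause instead of a session-wide SET; a minimal sketch using the query from the question (same illustrative values as above):

SELECT COUNT(*)
FROM ObserverNodeOccurrence AS occ
LEFT JOIN ObserverNodeOccurrence_NodeElements AS occ_ne
    ON occ._id = occ_ne.occurrenceId
WHERE occ_ne.snippet LIKE '%<img>%'
SETTINGS max_bytes_before_external_group_by = 20000000000,
    max_memory_usage = 40000000000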

how to match this pattern using regex

I am new to Ruby and trying to use regular expressions.
Basically, I want to read a file and check whether it has the right format.
Requirements for the correct format:
1: The line should start with from
2: There should be one and only one space between tokens, unless there is a comma
3: No consecutive commas
4: The from and to values are numbers
5: from and to must be followed by a colon
from: z to: 2
from: 1 to: 3,4
from: 2 to: 3
from:3 to: 5
from: 4 to: 5
from: 4 to: 7
to: 7 from: 6
from: 7 to: 5
0: 7 to: 5
from: 24 to: 5
from: 7 to: ,,,5
from: 8 to: 5,,5
from: 9 to: ,5
If I have the correct regular expression, then the output should be:
from: 1 to: 3,4
from: 2 to: 3
from: 4 to: 5
from: 4 to: 7
from: 7 to: 5
from: 24 to: 5
So in this case these are the false ones:
from: z to: 2 # because the from value is z, not a number
from:3 to: 5 # because there is no space after from:
to: 7 from: 6 # because it starts with to but supposed to start with from
0: 7 to: 5 # starts with 0 instead of from
from: 7 to: ,,,5 # because there are two consecutive commas
from: 8 to: 5,,5 # two consecutive commas
from: 9 to: ,5 # start with comma
OK, the regex you want is something like this:
from: \d+(?:,\d+)* to: \d+(?:,\d+)*
This assumes that multiple numbers are permitted in the from: column as well. If not, you want this one:
from: \d+ to: \d+(?:,\d+)*
To verify that the whole file is valid (assuming all it contains are lines like this one), you could use a function like this:
def valid_file(filename)
  # Every line must be exactly "from: <digits>[,<digits>...] to: <digits>[,<digits>...]".
  File.foreach(filename).all? do |line|
    line.chomp.match?(/\Afrom: \d+(?:,\d+)* to: \d+(?:,\d+)*\z/)
  end
end
What you are looking for is called negative lookahead. Specifically, \d+(?!,,) which says: match 1 or more consecutive digits not followed by 2 commas. Here is the whole thing:
str = "from: z to: 2
from: 1 to: 3,4
from: 2 to: 3
from:3 to: 5
from: 4 to: 5
from: 4 to: 7
to: 7 from: 6
from: 7 to: 5
0: 7 to: 5
from: 24 to: 5
from: 7 to: ,,,5
from: 8 to: 5,,5
from: 9 to: ,5
"
str.each_line do |line|
puts(line) if line =~ /\Afrom: \d+ to: \d+(?!,,)/
end
Output:
from: 1 to: 3,4
from: 2 to: 3
from: 4 to: 5
from: 4 to: 7
from: 7 to: 5
from: 24 to: 5

Products are not being assigned to categories via Magmi

I am using Magmi to upload products and it is working fine; the products are being uploaded.
There is only one problem: they are not showing up on the front end, although they do show up in admin.
When I tried to find the reason, I found that the products are not being assigned to any category; when I assigned categories manually, they showed up on the front end.
Can anybody help?
Here is a sample of my CSV
sku _store _attribute_set _type _category _root_category _product_websites ada_compliant backplate_dimension base_dimension brand bulb_included bulb_type bulb_wattage canopy_dimension carton_height carton_length carton_width collection1 cost country_of_manufacture country_orgin created_at custom_design custom_design_from custom_design_to custom_layout_update depth description designer diameter dimension enable_googlecheckout energy extension finish finish1 gallery gender gift_message_available harddrive_speed hardrive has_options height height_1 image image_label in_depth lamping length manufacturer1 max_resolution media_gallery megapixels memory meta_description meta_keyword meta_title minimal_price model msrp msrp_display_actual_price_type msrp_enabled name news_from_date news_to_date no_bulbs options_container page_layout price processor ram_size required_options response_time room screensize shade_color shade_dimension shade_material shape shirt_size shoe_size shoe_type short_description small_image small_image_label special_from_date special_price special_to_date status style switch tax_class_id thumbnail thumbnail_label updated_at url_key url_path visibility weight width qty min_qty use_config_min_qty is_qty_decimal backorders use_config_backorders min_sale_qty use_config_min_sale_qty max_sale_qty use_config_max_sale_qty is_in_stock notify_stock_qty use_config_notify_stock_qty manage_stock use_config_manage_stock stock_status_changed_auto use_config_qty_increments qty_increments use_config_enable_qty_inc enable_qty_increments is_decimal_divided _links_related_sku _links_related_position _links_crosssell_sku _links_crosssell_position _links_upsell_sku _links_upsell_position _associated_sku _associated_default_qty _associated_position _tier_price_website _tier_price_customer_group _tier_price_qty _tier_price_price _group_price_website _group_price_customer_group _group_price_price _media_attribute_id _media_image _media_lable _media_position _media_is_disabled
EP777777-81 admin Default simple Wall Lights/Wall Sconces base No Maxim Lighting No Medium base bulbs 100 29.72 33.66 10.43 Basix 170 Contemporary collection with sweeping arms and clean lines. Offered in Ice glass and Satin Nickel finish or Wilshire glass and Oil Rubbed Bronze finish. Maxim Lighting 31.5 H x 32 W x L Dry Locations Satin Nickel 1 31.5 /10001CLPC.jpg Basix 9-Light Chandelier Maxim Lighting 0 Basix 9-Light Chandelier7777 Ceiling Lights, Chandeliers, lighting, lights, Maxim Lighting Maxim Lighting Basix 9-Light Chandelier $510.00 Basix 9-Light Chandelier9999 9 255 Basix 9-Light Chandelier /10001CLPC.jpg Basix 9-Light Chandelier 1 Contemporary 2 /10001CLPC.jpg Basix 9-Light Chandelier 4 26 32 10 0 1 0 0 1 1 1 100 1 1 1 0 1 0 1 0 1 0 0
I had the same problem, BUT I used the Magmi classes in a personal project, not the Magmi interface.
I solved my problem by adding "category_ids" => "2" to the product data array. This is the id of the category.
It may not help you, but it may help others.
Looking at your CSV, some of your column names are incorrect (including Category)
For example:
_store _attribute_set _type _category _product_websites
Should be:
store attribute_set type category websites
You can see the required column names at Magmi: Import New Products
Also ensure you have the On the fly category creator/importer plugin enabled in your Magmi configuration, and that your Category column values follow the format outlined in the documentation.
Try naming the field "categories" instead of "category"
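For example, a minimal sketch of a Magmi CSV header and data row with a categories column (column set heavily trimmed, values illustrative; the slash-separated value is the category path taken from your sample, in the format the on-the-fly category importer expects):

sku,type,attribute_set,name,price,qty,is_in_stock,categories
EP777777-81,simple,Default,Basix 9-Light Chandelier,255,26,1,Wall Lights/Wall Sconces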
