MariaDB doesn't see Mac desktop files through the terminal - XAMPP

When I try to load a file into MariaDB through the terminal in XAMPP, and the file is on my Desktop, I get an error:
LOAD DATA INFILE '/Users/buzz/Desktop/FEM/data/test_1.csv'
INTO TABLE food2_test
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 ROWS (food, calories, carbs, potassium, protein, fiber, unit, notes)
;
ERROR 13 (HY000): Can't get stat of '/Users/buzz/Desktop/FEM/data/test_1.csv' (Errcode: 2 "No such file or directory")
XAMPP can only access the file if I put it on the shared drive that appears when I mount XAMPP.
LOAD DATA local INFILE '/opt/lampp/htdocs/test_1.csv'
INTO TABLE food2_test
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 ROWS (food, calories, carbs, potassium, protein, fiber, unit, notes)
;
Why? How can I use files from my Desktop? Also, why do I have to mount XAMPP as an external drive? Why aren't the files on my Mac's local drive?
Thanks
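If this is the macOS XAMPP-VM build (an assumption here), the whole stack runs inside a small Linux virtual machine, which is why you mount it as a drive: the MariaDB server only sees the VM's filesystem, so /Users/buzz/Desktop does not exist for it. One hedged alternative is to run a MySQL/MariaDB client on the Mac itself and use LOCAL, so the client reads the Desktop file and streams it to the server; the host, port, database name, and credentials below are placeholders:
# run from the macOS Terminal, not from the shell inside the VM;
# LOCAL requires local_infile to be enabled on both client and server
mysql --local-infile=1 -h 127.0.0.1 -P 3306 -u root -p food_db \
  -e "LOAD DATA LOCAL INFILE '/Users/buzz/Desktop/FEM/data/test_1.csv'
      INTO TABLE food2_test
      FIELDS TERMINATED BY ','
      LINES TERMINATED BY '\n'
      IGNORE 1 ROWS (food, calories, carbs, potassium, protein, fiber, unit, notes);"
Otherwise, copying the file onto the mounted /opt/lampp volume, as you already do, is the expected workflow.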

Related

Getting NULL values after loading data into Hive tables from an online dataset

I am trying to load data from an online dataset into my Hive table using the Hue interface, but I am getting NULL values.
Here's my dataset:
https://www.kaggle.com/psparks/instacart-market-basket-analysis?select=aisles.csv
Here's my code:
CREATE TABLE IF NOT EXISTS AISLES (aisles_id INT, aisles STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
tblproperties("skip.header.line.count"="1");
Here's how I loaded the data:
LOAD DATA LOCAL INPATH '/home/hadoop/aisles.csv' INTO TABLE aisles;
My workarounds, none of which worked:
FIELDS TERMINATED BY ','
FIELDS TERMINATED BY '\t'
FIELDS TERMINATED BY ''
FIELDS TERMINATED BY ' '
I also tried removing LINES TERMINATED BY '\n'.
This is how I downloaded the data:
[hadoop#ip-172-31-76-58 ~]$ wget -O aisles.csv "https://www.kaggle.com/psparks/instacart-market-basket-analysis?select=aisles.csv"
--2020-10-14 23:50:06-- https://www.kaggle.com/psparks/instacart-market-basket-analysis?select=aisles.csv
Resolving www.kaggle.com (www.kaggle.com)... 35.244.233.98
Connecting to www.kaggle.com (www.kaggle.com)|35.244.233.98|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘aisles.csv’
I checked the location of the table I created, and this is what it says:
hdfs://ip-172-31-76-58.ec2.internal:8020/user/hive/warehouse/aisles
I tried browsing the directory to see where the file was saved:
[hadoop#ip-172-31-76-58 ~]$ hdfs dfs -ls /user/hive/warehouse
Found 1 items
drwxrwxrwt - arjiesaenz hadoop 0 2020-10-15 00:57 /user/hive/warehouse/aisles
So I tried to change my load script like this:
LOAD DATA INPATH '/user/hive/warehouse/aisles.csv' INTO TABLE aisles;
But I got an error:
Error while compiling statement: FAILED: SemanticException line 6:61 Invalid path ''/user/hive/warehouse/aisles.csv'': No files matching path hdfs://ip-172-31-76-58.ec2.internal:8020/user/hive/warehouse/aisles.csv
Hopefully someone can help me pinpoint the problem with my code.
Thanks.
I tried the same on my Hadoop cluster, and the code worked without any issues.
Here's my execution snippet:
hive> CREATE TABLE IF NOT EXISTS AISLES (aisles_id INT, aisles STRING)
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY ','
> LINES TERMINATED BY '\n'
> STORED AS TEXTFILE
> tblproperties("skip.header.line.count"="1");
OK
Time taken: 0.034 seconds
hive> load data inpath '/user/hirwuser1448/aisles.csv' into table AISLES;
Loading data to table revisit.aisles
Table revisit.aisles stats: [numFiles=1, totalSize=2603]
OK
Time taken: 0.183 seconds
hive> select * from AISLES limit 10;
OK
1 prepared soups salads
2 specialty cheeses
3 energy granola bars
4 instant foods
5 marinades meat preparation
6 other
7 packaged meat
8 bakery desserts
9 pasta sauce
10 kitchen supplies
Time taken: 0.038 seconds, Fetched: 10 row(s)
I think you need to cross-check whether your dataset aisles.csv is actually at the HDFS location and not stored in a local directory.
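For example, a minimal sketch, assuming the file is still on the local filesystem at /home/hadoop/aisles.csv and that a staging path like /user/hadoop/staging is acceptable (both are placeholders):
# copy the local file into HDFS first
hdfs dfs -mkdir -p /user/hadoop/staging
hdfs dfs -put /home/hadoop/aisles.csv /user/hadoop/staging/
hdfs dfs -ls /user/hadoop/staging
Then load it from Hive with the HDFS path; note that LOAD DATA INPATH moves the file from the staging directory into the table's warehouse location:
LOAD DATA INPATH '/user/hadoop/staging/aisles.csv' INTO TABLE aisles;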
The problem is with your load command.
LOAD DATA INPATH '/user/hive/warehouse/aisles.csv' INTO TABLE aisles;
I see you tried browsing the directory to find the saved file. Do you see aisles.csv under that directory? If the file is there, then you are giving the wrong path in your load command; otherwise the file isn't there at all.
I found a workaround: I downloaded the dataset, uploaded it to an Amazon S3 bucket, and used the S3 path in the LOAD command.
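As a sketch of that workaround (the bucket name and key are hypothetical, and it assumes an EMR-style Hive where s3:// paths are readable):
aws s3 cp /home/hadoop/aisles.csv s3://my-bucket/instacart/aisles.csv
Then, in Hive:
LOAD DATA INPATH 's3://my-bucket/instacart/aisles.csv' INTO TABLE aisles;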

MYSQL bulk insert - Linux

I am trying to load a text file into MySQL, but I get the error below.
Error Code: 1064
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'Rank=#Rank' at line 7
LOAD DATA LOCAL INFILE 'F:/keyword/Key_2018-10-06_06-44-09.txt'
INTO TABLE table
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\r\n'
IGNORE 0 LINES
(#dump_date,#Rank)
SET dump_date=#dump_date,Rank=#Rank;
The above query works on the Windows server, but the same query does not work on the Linux server.
I suggest that you try executing that command from the command line as a single line:
LOAD DATA LOCAL INFILE 'F:/keyword/Key_2018-10-06_06-44-09.txt' INTO TABLE
table FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\r\n' IGNORE 0 LINES
(#dump_date,#Rank) SET dump_date=#dump_date,Rank=#Rank;
For formatting reasons I have added newlines above, but don't do that when you run it from the Linux prompt; use a single line. The text will wrap around as you type it.
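As a sketch, runnable straight from the Linux shell: the path, database name, and the table name keyword_rank are placeholders (TABLE itself is a reserved word in MySQL), and the variables use MySQL's documented @ prefix, since # starts a comment in the mysql client:
mysql --local-infile=1 -u youruser -p yourdb -e "LOAD DATA LOCAL INFILE '/data/keyword/Key_2018-10-06_06-44-09.txt' INTO TABLE keyword_rank FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\r\n' IGNORE 0 LINES (@dump_date, @Rank) SET dump_date = @dump_date, Rank = @Rank;"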

Greenplum loading data from table to file using external table

I ran the create script below and it created the table:
Create writable external table FLTR (like dbname.FLTR)
LOCATION ('gpfdist://172.90.38.190:8081/fltr.out')
FORMAT 'CSV' (DELIMITER ',' NULL '')
DISTRIBUTED BY (fltr_key);
But when I tried inserting into the file, like insert into fltr.out select * from dbname.fltr,
I got the error below: cannot find server connection.
Please help me out
I think your gpfdist is probably not running. Try:
gpfdist -p 8081 -l ~/gpfdist.log -d ~/ &
on 172.90.38.190.
This will start gpfdist using your home directory as the data directory.
When I do that, my inserts work and create a file at ~/fltr.out.
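Once gpfdist is serving on 172.90.38.190:8081, the write goes through the external table name rather than the file name; a minimal sketch, assuming the table created above:
-- rows inserted into the writable external table are streamed to gpfdist,
-- which writes them to ~/fltr.out on 172.90.38.190
INSERT INTO fltr SELECT * FROM dbname.fltr;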

got error 22 from storage engine mysql

mysqldump: Error: 'got error 22 from storage engine' when trying to dump tablespaces
mysqldump: Got error: 23: Out of resources when opening file '.\database\table.MYD' (Errcode: 24) when using LOCK TABLES
I get this error when trying to make a dump of any database that I select. It looks like the database is corrupted. Is it possible to repair it?
You seem to have reached the maximum number of open files. This limit is either MySQL's or the system's.
Increase the value of open_files_limit in your MySQL configuration file (this directive does not exist in a default installation, so you might need to create it in the [mysqld] section), as sketched below.
Or increase the limit at the system level (though I am not sure this applies to Windows).
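A sketch of the configuration change (the value is only an example; size it to your number of tables and connections):
# my.cnf (my.ini on Windows), server section
[mysqld]
open_files_limit = 8192
Restart the MySQL service afterwards and verify the effective value with SHOW VARIABLES LIKE 'open_files_limit';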
Here are some other causes of this error when importing a dump file with the source command:
Type "source path-to-SQL-file", but you must follow these rules:
Use the full source command, not the . shortcut.
Have no spaces in your path. I copied mine to the root of a drive. Note that spaces in the file name are OK, just not in the path.
Do not quote the file name, even if it has spaces. This gave error 22.
Use forward slashes in the path, e.g., C:/path/to/filename.sql. Otherwise you’ll get error 2.
Do not end with a semicolon.
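Putting those rules together, a hypothetical invocation from the mysql prompt (forward slashes, no quotes, no trailing semicolon):
mysql> source C:/dumps/backup.sql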
Please check your read/write access to the drive where you have stored your MySQL database.
Error 22 usually occurs when you have no write access to that drive.

sqlite field separator for importing

I just started using SQLite for our log processing system, where I import a file that uses '#' as the field separator into an SQLite database.
If I run the following in the SQLite REPL:
$ sqlite3 log.db
sqlite> .separator "#"
sqlite> .import output log_dump
It works [the import was successful]. But if I try to do the same via a bash script:
sqlite log.db '.separator "#"'
sqlite log.db '.import output log_dump'
it doesn't. The separator reverts to '|' and I get an error saying there are insufficient columns:
output line 1: expected 12 columns of data but found 1
How can I overcome this issue?
You should pass both commands to sqlite in a single invocation:
echo -e '.separator "#"\n.import output log_dump' | sqlite log.db
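An equivalent sketch, assuming the same database and file names, is to feed both dot-commands to a single sqlite3 session through a here-document:
sqlite3 log.db <<'EOF'
.separator "#"
.import output log_dump
EOF
Either way, the point is that .separator and .import must run in the same sqlite session; each separate sqlite log.db '...' invocation starts a fresh session with the default '|' separator.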
