Got error 22 from storage engine - MySQL on Windows

mysqldump: Error: 'got error 22 from storage engine' when trying to dump tablespaces
mysqldump: Got error: 23: Out of resources when opening file '.\database\table.MYD' (Errcode: 24) when using LOCK TABLES
I got this error when trying to make a dump of any database that I select. It looks like the database is corrupted. Is it possible to repair it?

You seem to have reached the maximum number of open files. This limit is either MySQL's or the system's.
Increase the value of open_files_limit in your MySQL configuration file (this directive does not exist in a default installation, so you might need to create it in the [mysqld] section).
Increase the limit at the system level (but I am not sure this applies to Windows).
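For example, a minimal fragment in the configuration file (my.ini on Windows) might look like this; the value 8192 is only an illustration, so pick a limit suited to your server:
[mysqld]
# open_files_limit is absent by default; add it if missing
open_files_limit = 8192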

Here are some reasons for this error:
Type "source path-to-SQL-file", but you must follow these rules:
Use the full source command, not the . shortcut.
Have no spaces in your path. I copied mine to the root of a drive. Note that spaces in the file name are OK, just not in the path.
Do not quote the file name, even if it has spaces. This gave error 22.
Use forward slashes in the path, e.g., C:/path/to/filename.sql. Otherwise you’ll get error 2.
Do not end with a semicolon.
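Following those rules, a call in the mysql client would look like this (the path and file name are just examples):
mysql> source C:/dumps/backup file.sql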

Please check your read/write access to the drive where your MySQL database is stored.
Error 22 usually occurs when you have no write access to that drive.


Import file to Greenplum failed because of one line of data (Navicat)

When importing a file into Greenplum, one line fails and the whole file is not imported successfully. Is there a way to skip the bad line and import the rest of the data into Greenplum successfully?
Here are my SQL execution and error messages:
copy cjh_test from '/gp_wkspace/outputs/base_tables/error_data_test.csv' using delimiters ',';
ERROR: invalid input syntax for integer: "FE00F760B39BD3756BCFF30000000600"
CONTEXT: COPY cjh_test, line 81, column local_city: "FE00F760B39BD3756BCFF30000000600"
Greenplum has an extension to the COPY command that lets you log errors and allow a certain number of errors without stopping the load. Here is an example from the documentation for the COPY command:
COPY sales FROM '/home/usr1/sql/sales_data' LOG ERRORS
SEGMENT REJECT LIMIT 10 ROWS;
That tells COPY that up to 10 bad rows can be ignored without stopping the load. The reject limit can be a number of rows or a percentage of the load file. You can check the full syntax in psql with: \h copy
If you are loading a very large file into Greenplum, I would suggest looking at gpload or gpfdist (which also support the segment reject limit syntax). COPY is single-threaded through the master server, whereas gpload/gpfdist load the data in parallel to all segments. COPY will be faster for smaller load files, and the others will be faster for load files with millions of rows.
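Applied to the table from the question, a sketch might look like this (it uses the WITH DELIMITER spelling rather than the old "using delimiters" form; check \h copy on your Greenplum version for the exact clause order):
copy cjh_test from '/gp_wkspace/outputs/base_tables/error_data_test.csv'
with delimiter ',' log errors segment reject limit 10 rows;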

What is the file ORA_DUMMY_FILE.f in Oracle?

Oracle version: 12.2.0.1
As you know, these are the Unix processes for the parallel servers in Oracle:
ora_p000_ora12c
ora_p001_ora12c
....
ora_p???_ora12c
They can also be seen with the view gv$px_process.
The spid of each parallel server can be obtained from there.
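For example, a minimal query (these columns are documented for the view):
select inst_id, server_name, pid, spid from gv$px_process;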
Then I look for the open files associated with the parallel server here:
ls -l /proc/<spid>/fd
And I'm finding around 500-10000 file descriptors, for several parallel servers, like this one:
991 -> /u01/app/oracle/admin/ora12c/dpdump/676185682F2D4EA0E0530100007FFF5E/ORA_DUMMY_FILE.f (deleted)
I've deleted them using the following (actually I've created a small script to do it, because there are thousands of them):
gdb -p <spid>
gdb> p close(<fd_id>)
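A rough sketch of such a cleanup loop (not the exact script; the spid value is a placeholder, and it assumes gdb can attach and that the stale descriptors are identified by the "(deleted)" name shown above):
spid=12345   # placeholder: OS pid of one parallel server, taken from gv$px_process
for fd in $(ls -l /proc/$spid/fd | grep 'ORA_DUMMY_FILE.f (deleted)' | awk '{print $9}'); do
    gdb --batch -p $spid -ex "p close($fd)"
done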
But after some hours the file descriptors start being created again (hundreds every day).
If they are not deleted, then eventually the Linux limit is reached and any parallel query throws an error like this:
ORA-12801: error signaled in parallel query server P001
ORA-01116: error in opening database file 132
ORA-01110: data file 132: '/u02/oradata/ora12c/pdbname/tablespacenaname_ts_1.dbf'
ORA-27077: too many files open
Does anyone have any idea of how and why these file descriptors are being created, and how to avoid it?
Edited: added some more information that could be useful.
I've verified that when a new PDB is created, a DATA_PUMP_DIR directory is created in it (select * from all_directories) pointing to:
/u01/app/oracle/admin/ora12c/dpdump/<xxxxxxxxxxxxx>
The Linux directory is also created.
Also, one file descriptor is created pointing to ORA_DUMMY_FILE.f in the new dpdump subdirectory, like the ones described initially:
lsof | grep "ORA_DUMMY_FILE.f (deleted)"
/u01/app/oracle/admin/ora12c/dpdump/<xxxxxxxxxxxxx>/ORA_DUMMY_FILE.f (deleted)
This may be OK; the problem I face is the continuous growth of file descriptors pointing to ORA_DUMMY_FILE.f, which eventually hits the Linux limit.

MapReduceIndexerTool output dir error "Cannot write parent of file"

I want to use Cloudera's MapReduceIndexerTool to understand how morphlines work. I created a basic morphline that just reads lines from the input file, and I tried to run the tool using this command:
hadoop jar /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-*-job.jar org.apache.solr.hadoop.MapReduceIndexerTool \
--morphline-file morphline.conf \
--output-dir hdfs:///hostname/dir/ \
--dry-run true
Hadoop is installed on the same machine where I run this command.
The error I'm getting is the following:
net.sourceforge.argparse4j.inf.ArgumentParserException: Cannot write parent of file: hdfs:/hostname/dir
at org.apache.solr.hadoop.PathArgumentType.verifyCanWriteParent(PathArgumentType.java:200)
The /dir directory has 777 permissions on it, so writing into it is definitely allowed. I don't know what I should do to allow the tool to write into that output directory.
I'm new to HDFS and I don't know how I should approach this problem. Logs don't offer me any info about that.
What I tried until now (with no result):
created a hierarchy of 2 directories (/dir/dir2) and put 777 permissions on both of them
changed the output-dir schema from hdfs:///... to hdfs://... because all the examples in the --help menu are built that way, but this leads to an invalid schema error
Thank you.
It states 'cannot write parent of file', and the parent in your case is /. Take a look at the source:
private void verifyCanWriteParent(ArgumentParser parser, Path file) throws ArgumentParserException, IOException {
  Path parent = file.getParent();
  if (parent == null || !fs.exists(parent) || !fs.getFileStatus(parent).getPermission().getUserAction().implies(FsAction.WRITE)) {
    throw new ArgumentParserException("Cannot write parent of file: " + file, parser);
  }
}
The message prints file, which in your case is hdfs:/hostname/dir, so file.getParent() will be /.
Additionally, you can test the permissions with the hadoop fs command; for example, you can try to create a zero-length file in the path:
hadoop fs -touchz /test-file
I solved that problem after days of working on it.
The problem is with that line --output-dir hdfs:///hostname/dir/.
First of all, there should not be 3 slashes at the beginning, as I kept putting in while trying to make this work; there should be only 2 (as in any valid HDFS URI). I actually put 3 slashes because otherwise the tool throws an invalid schema exception! You can easily see in the code that the schema check is done before the verifyCanWriteParent check.
I tried to get the hostname by simply running the hostname command on the CentOS machine I was running the tool on. This was the main issue. I analyzed the /etc/hosts file and saw that there were 2 hostnames for the same local IP. I took the second one and it worked. (I also attached the port to the hostname, so the final format is: --output-dir hdfs://correct_hostname:8020/path/to/file/from/hdfs)
This error is very confusing because everywhere you look for the namenode hostname, you will see the same thing that the hostname command returns. Moreover, the errors are not structured in a way that you can diagnose the problem and take a logical path to solve it.
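Putting it together, the working invocation looks roughly like this (correct_hostname stands for the namenode hostname found in /etc/hosts, and 8020 is the usual namenode port):
hadoop jar /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-*-job.jar org.apache.solr.hadoop.MapReduceIndexerTool \
--morphline-file morphline.conf \
--output-dir hdfs://correct_hostname:8020/dir/ \
--dry-run true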
Additional information regarding this tool and debugging it
If you want to see the actual code that runs behind it, check the cloudera version that you are running and select the same branch on the official repository. The master is not up to date.
If you just want to run this tool to play with the morphline (using the --dry-run option) without connecting to Solr, you can't. You have to specify a ZooKeeper endpoint and a Solr collection or a Solr config directory, which involves additional research. This is something that could be improved in this tool.
You don't need to run the tool with -u hdfs; it works with a regular user.

How to do BULK INSERT in Oracle Database

I am trying to do a bulk insert into tables from a CSV file using Oracle 11. My problem is that the database is on a remote machine which I can reach with sqlplus using this:
sqlplus username#oracle.machineName
Unfortunately sqlldr has trouble connecting using the following command:
sqlldr userid=userName/PW#machinename control=BULK_LOAD_CSV_DATA.ctl log=sqlldr.log
Error is:
Message 2100 not found; No message file for product=RDBMS, facility=UL
Message 2100 not found; No message file for product=RDBMS, facility=UL
Having given up on this approach, I tried writing a basic SQL script, but I am unsure of the proper Oracle keyword for BULK. I know this works in MySQL, but I get:
unknown command beginning "BULK INSER..."
When running the script:
BULK INSERT <TABLE_NAME>
FROM 'CSVFILE.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
I don't care which one works! Either one will do, I just need a little help.
Sorry I am a dumb dumb! I forgot to add oracle/bin to my path!
If you have found this post, add the bin directory to your path (Linux) using the following commands:
export ORACLE_HOME=/path/to/oracle/client
export PATH=$PATH:$ORACLE_HOME/bin
Sorry if I wasted anyone's time ....
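For reference, BULK INSERT is not Oracle syntax; on the Oracle side the equivalent route is SQL*Loader driven by a control file. A minimal sketch of what BULK_LOAD_CSV_DATA.ctl could contain (table and column names are hypothetical):
-- hypothetical table and columns; replace with your own
LOAD DATA
INFILE 'CSVFILE.csv'
APPEND INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(col1, col2, col3)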

PHP session handling errors

I have this at the very top of my send.php file:
ob_start();
#session_start();
//some display stuff
$_SESSION['id'] = $id; //$id has a value
header('location: test.php');
And the following at the very top of my test.php file:
ob_start();
#session_start();
error_reporting(E_ALL);
ini_set('display_errors', '1');
print_r($_SESSION);
When the data sends to test.php, the following is displayed:
Array ( )
Warning: Unknown: open(/var/lib/php/session/sess_isu2r2bqudeosqvpoo8a67oj02, O_RDWR) failed: Permission denied (13) in Unknown on line 0
Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0
I've tried only using session_start(); but the results are the same.
Look at your message. The first thing relates to permissions:
open(/var/lib/php/session/sess_isu2r2bqudeosqvpoo8a67oj02, O_RDWR) failed: Permission denied (13) in Unknown on line 0
You have to check the file permissions and change the mode of /var/lib/php/session/.
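For example (a sketch; the group depends on which user your web server runs as):
# assumes the web server runs as the 'apache' user/group; adjust to yours
chown root:apache /var/lib/php/session
chmod 770 /var/lib/php/session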
The second thing relates to session.save_path:
Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0
In php.ini:
[Session]
; Handler used to store/retrieve data.
session.save_handler = files
; Argument passed to save_handler. In the case of files, this is the path
; where data files are stored. Note: Windows users have to change this
; variable in order to use PHP's session functions.
;
; As of PHP 4.0.1, you can define the path as:
;
; session.save_path = "N;/path"
;
; where N is an integer. Instead of storing all the session files in
; /path, what this will do is use subdirectories N-levels deep, and
; store the session data in those directories. This is useful if you
; or your OS have problems with lots of files in one directory, and is
; a more efficient layout for servers that handle lots of sessions.
;
; NOTE 1: PHP will not create this directory structure automatically.
; You can use the script in the ext/session dir for that purpose.
; NOTE 2: See the section on garbage collection below if you choose to
; use subdirectories for session storage
;
session.save_path = /tmp/ <= HERE YOU HAVE TO MAKE SURE
; Whether to use cookies.
session.use_cookies = 1
You have to change your session.save_path setting to an accessible directory, /tmp/ for example.
How to change: http://php.net/session_save_path
On a shared host, it is advised to set your session save path inside your home directory but below the document root.
Also note that using ob_start is unnecessary here, and I am sure you put the # operator there by accident and are already going to remove it forever, aren't you?
This was a known bug in some versions of PHP. Depending on your server environment, you can try setting the sessions folder to 777:
/var/lib/php/session (your location may vary)
I ended up using this workaround:
session_save_path('/path/not/accessable_to_world/sessions');
ini_set('session.gc_probability', 1);
You will have to create this folder and make it writable. I haven't messed around with the permissions much, but 777 worked for me (obviously).
Make sure the place where you are storing your sessions isn't accessible to the world.
This solution may not work for everyone, but I hope it helps some people!
You can fix the issue with the following steps:
Verify the folder exists with sudo cd /var/lib/php/session. If it does not exist then sudo mkdir /var/lib/php/session or double check the logs to make sure you have the correct path.
Give the folder full read and write permissions with sudo chmod 666 /var/lib/php/session.
Rerun your script and it should be working fine; however, it's not recommended to leave the folder with full permissions. For security, files and folders should only have the minimum permissions required. The following steps will fix that:
You should already be in the session folder so just run sudo ls -l to find out the owner of the session file.
Set the correct owner of the session folder with sudo chown user /var/lib/php/session.
Give just the owner full read and write permissions with sudo chmod 600 /var/lib/php/session.
NB
You might not need to use the sudo command.
Go to your php.ini file or find PHP.ini EZConfig on your cPanel and set your session.save_path to the full path leading to the tmp directory, e.g. /home/cpanelusername/tmp
Please make sure session.save_path is set correctly in php.ini. PHP needs read/write access to the directory this variable is set to.
More information: http://www.php.net/manual/en/session.configuration.php#ini.session.save-path
I had the same error, and everything was correct, such as the folder permissions.
It looks like a bug in PHP in my case, because when I deleted my PHPSESSID cookie it worked again. Apparently something got messed up: the session was removed but the cookie was still active, so PHP should identify the cause differently, first checking whether the session file is still there and giving another error instead of the permission error.
When using the latest WHM (v66.0.23) you may go to the MultiPHP INI Editor, choose the PHP version, and set session.save_path to the default, i.e. /var/cpanel/php/sessions/ea-php70, instead of the previous plain tmp. This helped me get rid of such errors.
When using the header function, PHP does not trigger a close of the current session. You must use session_write_close to close the session and release the file lock on the session file.
ob_start();
#session_start();
//some display stuff
$_SESSION['id'] = $id; //$id has a value
session_write_close();
header('location: test.php');
Check your cPanel disk space, remove unused files or the error.log file, and then try to log in to your application (this worked for me).
I got these two error messages, along with two others, and fiddled around for a while before discovering that all I needed to do was restart XAMPP! I hope this helps save someone else from the same wasted time!
Warning: session_start(): open(/var/folders/zw/hdfw48qd25xcch5sz9dd3w600000gn/T/sess_f8bgs41qn3fk6d95s0pfps60n4, O_RDWR) failed: Permission denied (13) in /Applications/XAMPP/xamppfiles/htdocs/foo/bar.php on line 3
Warning: session_start(): Cannot send session cache limiter - headers already sent (output started at /Applications/XAMPP/xamppfiles/htdocs/foo/bar.php:3) in /Applications/XAMPP/xamppfiles/htdocs/foo/bar.php on line 3
Warning: Unknown: open(/var/lib/php/session/sess_isu2r2bqudeosqvpoo8a67oj02, O_RDWR) failed: Permission denied (13) in Unknown on line 0
Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0
I'm using php-5.4.45 and I got the same problem.
If you are a php-fpm user, try editing php-fpm.conf and changing listen.owner and listen.group to the right ones. My nginx user is apache, so here I changed these two params to apache, and then it worked well for me.
For an Apache user, I guess you should edit your FastCGI params with reference to the two params I mentioned above.
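For example, in the php-fpm pool configuration (a sketch; it assumes the web server runs as the apache user and group, so adjust to yours):
; make the FPM socket owned by the web server user
listen.owner = apache
listen.group = apache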
If you use a configured vhost and see the same error, then you can override the default setting of php_value session.save_path under your <VirtualHost *:80>:
#
# Apache specific PHP configuration options
# those can be override in each configured vhost
#
php_value session.save_handler "files"
php_value session.save_path "/var/lib/php/5.6/session"
php_value soap.wsdl_cache_dir "/var/lib/php/5.6/wsdlcache"
Change the path to your own '/tmp' with chmod 777.
