Here is an elegant example of how to download a file and copy it into the /etc/yum.repos.d folder:
REPOSITORY_SERVER=master_machine01
wget -nd -r -P /etc/yum.repos.d/ -A ".repo" "http://$REPOSITORY_SERVER/ambari/centos7/2.6.2.2-1/ambari.repo"
After the above command runs, the ambari.repo file will be copied to /etc/yum.repos.d/.
Note: on the repository server, the ambari.repo file's path is:
ls -ltr /var/www/html/ambari/centos7/2.6.2.2-1/ambari.repo
-rw-r--r-- 1 root users 304 Jun 11 2018 /var/www/html/ambari/centos7/2.6.2.2-1/ambari.repo
So this is the simple case. Now, what if the path can vary (with different version directories)?
$REPOSITORY_SERVER/ambari/centos7/2.6.2.3-1/ambari.repo
or
$REPOSITORY_SERVER/ambari/centos7/2.6.2.2-4/ambari.repo
Then how can we use the CLI with wildcards?
We tried the following:
wget -nd -r -P /etc/yum.repos.d/ -A ".repo" "http://$REPOSITORY_SERVER/ambari/centos7/*/ambari.repo"
but we get:
HTTP request sent, awaiting response... 404 Not Found
2021-11-28 18:40:07 ERROR 404: Not Found.
or even with a backslash:
wget -nd -r -P /etc/yum.repos.d/ -A ".repo" "http://$REPOSITORY_SERVER/ambari/centos7/\*/ambari.repo"
but with the same error.
Any idea how to resolve this issue?
How to use the CLI with wildcards?
It is not possible to perform glob expansion over the HTTP protocol; globbing is a shell feature, and the two are very unrelated technologies.
How to resolve this issue?
Devise and implement a method of getting the available files under a certain path from the HTTP server. For example, contact the server administrator and ask them about it. Or, if the HTTP server supports serving a directory listing, recursively fetch the listing and filter it to find all matching paths. Or find and query some other site that contains all the links and filter the obtained answer to extract the links you need. Etc.
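Building on the directory-listing idea: if the HTTP server does generate index pages, a recursive wget started one level up can crawl them and keep only the matching files. A minimal sketch, assuming the server lists the contents of /ambari/centos7/ and that a crawl depth of 2 reaches the version directories (both are assumptions about your server):

REPOSITORY_SERVER=master_machine01
# -r crawls the generated index pages, -l 2 limits the crawl depth,
# -A keeps only files named ambari.repo, -nd flattens the directory tree
wget -nd -r -l 2 -P /etc/yum.repos.d/ -A "ambari.repo" "http://$REPOSITORY_SERVER/ambari/centos7/"

No wildcard ever reaches the server here: wget downloads each index page, follows the links it finds, and discards everything that does not match the -A pattern.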
This is my setup:
I use AWS Batch running a custom Docker image.
The startup.sh file is an entrypoint script that reads the nth line of a text file and copies the files listed there from S3 into the Docker container.
For example, if the first line of the .txt file is 'Startup_000017 Startup_000018 Startup_000019', the bash script reads this line and uses a for loop to copy the files over.
This is part of my bash script:
STARTUP_FILE_S3_URL=s3://cmtestbucke/Config/
Startup_FileNames=$(sed -n "${LINE}p" file.txt)
for i in ${Startup_FileNames}
do
    Startup_FileURL=${STARTUP_FILE_S3_URL}${i}
    echo "$Startup_FileURL"
    aws s3 cp "${Startup_FileURL}" /home/CM_Projects/ &
done
Here is the log output from aws:
s3://cmtestbucke/Config/Startup_000017
s3://cmtestbucke/Config/Startup_000018
s3://cmtestbucke/Config/Startup_000019
Completed 727 Bytes/727 Bytes (7.1 KiB/s) with 1 file(s) remaining download: s3://cmtestbucke/Config/Startup_000018 to Data/Config/Startup_000018
Completed 731 Bytes/731 Bytes (10.1 KiB/s) with 1 file(s) remaining download: s3://cmtestbucke/Config/Startup_000017 to Data/Config/Startup_000017
fatal error: An error occurred (404) when calling the HeadObject operation: Key "Config/Startup_000019 " does not exist.
My S3 bucket certainly contains the object s3://cmtestbucke/Config/Startup_000019.
I noticed this happens regardless of the filenames: the last iteration always gives this error.
I tested this bash logic locally with the same aws commands, and it copies all 3 files.
Can someone please help me figure out what is wrong here?
The problem was with the EOL (end-of-line) characters of the text file: they were set to Windows (CR LF), while the Docker image runs Ubuntu. The invisible \r stayed attached to the last filename on each line, which is why only the last iteration failed (note the stray character inside "Config/Startup_000019 " in the error message). I changed the EOL to Unix (LF) and the problem was solved.
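A defensive fix is to strip carriage returns while reading the line, so the script works regardless of how the file was saved. A minimal sketch using the same variables as the script above:

# strip any \r so a CRLF-encoded file.txt cannot leak a stray
# carriage return into the last filename on the line
Startup_FileNames=$(sed -n "${LINE}p" file.txt | tr -d '\r')

Alternatively, convert the file once with dos2unix file.txt, if that tool is installed in the image.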
I am using NiFi to transfer files between FTP locations.
I have to transfer files from an SFTP location to an FTP directory.
I have the below folder structure in the remote sftp location.
/rootfolder/
/subfolder1
/subfolder2
/subfolder3
I need to download the respective files from each subfolder to a local directory that has a similar structure.
My workflow includes
ListSFTP -> FetchSFTP (3) -> PutFTP
In ListSFTP
Remote path: /rootfolder
In FetchSFTP1
Remote path: /rootfolder/subfolder1
In FetchSFTP2
Remote path: /rootfolder/subfolder2
In FetchSFTP3
Remote path: /rootfolder/subfolder3
But this does not seem to work. Can someone help me with how I can transfer files from remote SFTP sub-folder(s)?
Thanks,
Aadil
You should be able to set ListSFTP to do a recursive search; then each flow file coming out of ListSFTP will have attributes for "path" and "filename".
Let's say you had one file under each directory in your example; you should get three flow files like the following:
ff 1
path = /rootfolder/subfolder1
filename = file1
ff 2
path = /rootfolder/subfolder2
filename = file2
ff 3
path = /rootfolder/subfolder3
filename = file3
You should only need one FetchSFTP processor with Remote Filename set to ${path}/${filename}.
If you have the same structure on your destination system, just set PutFTP's Remote Path to ${path}.
If you have a slightly different structure, use UpdateAttribute to modify "path" right before PutFTP.
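Concretely, the whole flow collapses to three processors. A sketch of the key properties, using the attribute names above (the exact property labels may differ slightly between NiFi versions):

ListSFTP:  Remote Path = /rootfolder, with recursive search enabled
FetchSFTP: Remote Filename = ${path}/${filename}
PutFTP:    Remote Path = ${path}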
mysqldump: Error: 'got error 22 from storage engine' when trying to dump tablespaces
mysqldump: Got error: 23: Out of resources when opening file '.\database\table.MYD' (Errcode: 24) when using LOCK TABLES
I get this error when trying to make a dump of any database that I select. It looks like the database is corrupted. Is it possible to repair it?
You seem to have reached the maximum number of open files. This limit is either MySQL's or the system's. You can:
Increase the value of open_files_limit in your MySQL configuration file (this directive does not exist in a default installation, so you might need to create it in the [mysqld] section), or
Increase the limit at the system level (but I am not sure this applies to Windows).
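For example, a minimal sketch of the my.cnf (my.ini on Windows) change described above; the value 8192 is an illustrative assumption, not a tuned recommendation:

[mysqld]
# raise the number of file descriptors mysqld may keep open
open_files_limit = 8192

Restart the MySQL service afterwards so the new limit takes effect.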
Here are some reasons for this error:
Type "source path-to-SQL-file". BUT, you must follow these rules:
Use the full source command, not the . shortcut.
Have no spaces in your path. I copied mine to the root of a drive. Note that spaces in the file name are OK, just not in the path.
Do not quote the file name, even if it has spaces. This gave error 22.
Use forward slashes in the path, e.g., C:/path/to/filename.sql. Otherwise you’ll get error 2.
Do not end with a semicolon.
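For example, a command that satisfies all of these rules at once (the path is hypothetical):

mysql> source C:/dumps/mydatabase.sql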
Please check your read/write access to the drive where you have stored your MySQL database.
Error 22 usually occurs when you have no write access to that drive.
I have this at the very top of my send.php file:
ob_start();
@session_start();
//some display stuff
$_SESSION['id'] = $id; //$id has a value
header('location: test.php');
And the following at the very top of my test.php file:
ob_start();
@session_start();
error_reporting(E_ALL);
ini_set('display_errors', '1');
print_r($_SESSION);
When the data is sent to test.php, the following is displayed:
Array ( )
Warning: Unknown: open(/var/lib/php/session/sess_isu2r2bqudeosqvpoo8a67oj02, O_RDWR) failed: Permission denied (13) in Unknown on line 0
Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0
I've tried only using session_start(); but the results are the same.
Look at your message. The first thing relates to permissions:
open(/var/lib/php/session/sess_isu2r2bqudeosqvpoo8a67oj02, O_RDWR) failed: Permission denied (13) in Unknown on line 0
You have to check the file permissions and change the mode of /var/lib/php/session/.
The second thing relates to session.save_path:
Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0
In php.ini:
[Session]
; Handler used to store/retrieve data.
session.save_handler = files
; Argument passed to save_handler. In the case of files, this is the path
; where data files are stored. Note: Windows users have to change this
; variable in order to use PHP's session functions.
;
; As of PHP 4.0.1, you can define the path as:
;
; session.save_path = "N;/path"
;
; where N is an integer. Instead of storing all the session files in
; /path, what this will do is use subdirectories N-levels deep, and
; store the session data in those directories. This is useful if you
; or your OS have problems with lots of files in one directory, and is
; a more efficient layout for servers that handle lots of sessions.
;
; NOTE 1: PHP will not create this directory structure automatically.
; You can use the script in the ext/session dir for that purpose.
; NOTE 2: See the section on garbage collection below if you choose to
; use subdirectories for session storage
;
session.save_path = /tmp/    ; <= here is the line you have to make sure is correct
; Whether to use cookies.
session.use_cookies = 1
You have to change your session.save_path setting to an accessible directory, /tmp/ for example.
How to change: http://php.net/session_save_path
On a shared host, it is advised to set your session save path to a directory inside your home directory, but outside the document root.
Also note that:
using ob_start() is unnecessary here,
and I am sure you put that @ operator in by accident and are already going to remove it forever, aren't you?
This was a known bug in some versions of PHP. Depending on your server environment, you can try setting the sessions folder to 777:
/var/lib/php/session (your location may vary)
I ended up using this workaround:
session_save_path('/path/not/accessable_to_world/sessions'); // keep sessions outside the document root
ini_set('session.gc_probability', 1); // re-enable PHP's own garbage collection for this custom path
You will have to create this folder and make it writeable. I haven't messed around with the permissions much, but 777 worked for me (obviously).
Make sure the place where you are storing your sessions isn't accessible to the world.
This solution may not work for everyone, but I hope it helps some people!
You can fix the issue with the following steps:
Verify the folder exists with sudo ls -ld /var/lib/php/session (sudo cd will not work, since cd is a shell builtin). If it does not exist, then sudo mkdir /var/lib/php/session, or double-check the logs to make sure you have the correct path.
Give the folder full permissions with sudo chmod 777 /var/lib/php/session (a directory needs the execute bit to be traversed, so 666 is not enough).
Rerun your script and it should be working fine. However, it's not recommended to leave the folder with full permissions; for security, files and folders should only have the minimum permissions required. The following steps will fix that:
Run sudo ls -l /var/lib/php/session to find out the owner of the session files.
Set the correct owner of the session folder with sudo chown user /var/lib/php/session (where user is the web-server user).
Give just the owner full access with sudo chmod 700 /var/lib/php/session.
NB
You might not need to use the sudo command.
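Condensed, the steps above look roughly like this (the www-data user is an assumption; on Red Hat-based systems the web server typically runs as apache):

sudo mkdir -p /var/lib/php/session        # create the folder if it is missing
sudo chown www-data /var/lib/php/session  # hand it to the web-server user
sudo chmod 700 /var/lib/php/session       # owner-only access; x lets the owner traverse the directory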
Go to your php.ini file, or find the PHP.ini EZConfig tool in your cPanel, and set your session.save_path to the full path of the tmp folder, e.g. /home/cpanelusername/tmp.
Please make sure session.save_path is set correctly in php.ini. PHP needs read/write access to the directory this variable points to.
More information: http://www.php.net/manual/en/session.configuration.php#ini.session.save-path
I had the same error even though everything, such as the folder permissions, was set correctly.
In my case it looked like a bug in PHP: when I deleted my PHPSESSID cookie, it worked again. Apparently the session had been removed while the cookie was still active, and PHP reported a permission error instead of first checking whether the session file still exists and giving a different error.
When using a recent WHM (v66.0.23), you can go to the MultiPHP INI Editor, choose the PHP version, and set session.save_path to the default, i.e. /var/cpanel/php/sessions/ea-php70, instead of the previous plain tmp. This helped me get rid of such errors.
When using the header() function, PHP does not trigger a close of the current session. You must use session_write_close() to close the session and remove the file lock from the session file.
ob_start();
@session_start();
//some display stuff
$_SESSION['id'] = $id; //$id has a value
session_write_close(); // flush session data to disk and release the file lock before redirecting
header('location: test.php');
Check your cPanel disk space. Remove unused files or the error.log file, and then try to log in to your application again. (This worked for me.)
I got these two error messages, along with two others, and fiddled around for a while before discovering that all I needed to do was restart XAMPP! I hope this helps save someone else from the same wasted time!
Warning: session_start(): open(/var/folders/zw/hdfw48qd25xcch5sz9dd3w600000gn/T/sess_f8bgs41qn3fk6d95s0pfps60n4, O_RDWR) failed: Permission denied (13) in /Applications/XAMPP/xamppfiles/htdocs/foo/bar.php on line 3
Warning: session_start(): Cannot send session cache limiter - headers already sent (output started at /Applications/XAMPP/xamppfiles/htdocs/foo/bar.php:3) in /Applications/XAMPP/xamppfiles/htdocs/foo/bar.php on line 3
Warning: Unknown: open(/var/lib/php/session/sess_isu2r2bqudeosqvpoo8a67oj02, O_RDWR) failed: Permission denied (13) in Unknown on line 0
Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0
I'm using PHP 5.4.45 and I got the same problem.
If you are a php-fpm user, try editing php-fpm.conf and changing listen.owner and listen.group to the right user. My nginx worker runs as apache, so I changed these two params to apache, and then it worked for me.
For Apache users, I guess you should edit your FastCGI params with the two parameters mentioned above in mind.
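A minimal sketch of that pool configuration change, assuming the web server runs as the apache user (the user name and the file location vary by distribution):

; in php-fpm.conf or the pool file (often www.conf)
listen.owner = apache
listen.group = apache

Then restart the php-fpm service so the socket is recreated with the new ownership.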
If you use a configured vhost and run into the same error, you can override the default php_value session.save_path setting under your <VirtualHost *:80>:
#
# Apache specific PHP configuration options
# those can be override in each configured vhost
#
php_value session.save_handler "files"
php_value session.save_path "/var/lib/php/5.6/session"
php_value soap.wsdl_cache_dir "/var/lib/php/5.6/wsdlcache"
Change the path to your own directory (e.g. '/tmp') and give it chmod 777.