On a new Rackspace server, I am getting the following error whenever I try to install and/or migrate a Craft 3.7.0 site from my local environment:
PHP Warning – yii\base\ErrorException session_start(): open(/var/lib/php/session/sess_6d4r4eip8iotcfif9hlbi701ap, O_RDWR) failed: Permission denied (13)
1. in /my/path at line 152
{
if ($this->getIsActive()) {
return;
}
$this->registerSessionHandler();
$this->setCookieParamsInternal();
YII_DEBUG ? session_start() : @session_start();
if ($this->getUseStrictMode() && $this->_forceRegenerateId) {
$this->regenerateID();
$this->_forceRegenerateId = null;
}
if ($this->getIsActive()) {
Yii::info('Session started', __METHOD__);
$this->updateFlashCounters();
2. in /my/path/vendor/craftcms/cms/src/web/ErrorHandler.php at line 76 – yii\base\ErrorHandler::handleError()
{
// Because: https://bugs.php.net/bug.php?id=74980
if (PHP_VERSION_ID >= 70100 && strpos($message, 'Narrowing occurred during type inference. Please file a bug report') !== false) {
return null;
}
return parent::handleError($code, $message, $file, $line);
}
/**
* @inheritdoc
*/
public function getExceptionName($exception)
3. craft\web\ErrorHandler::handleError()
4. in /my/path/vendor/yiisoft/yii2/web/Session.php at line 152 – session_start()
}
$this->registerSessionHandler();
$this->setCookieParamsInternal();
YII_DEBUG ? session_start() : @session_start();
if ($this->getUseStrictMode() && $this->_forceRegenerateId) {
$this->regenerateID();
$this->_forceRegenerateId = null;
}
5. in /my/path/vendor/yiisoft/yii2/web/Session.php at line 751 – yii\web\Session::open()
/**
* @param mixed $key session variable name
* @return bool whether there is the named session variable
*/
public function has($key)
{
$this->open();
return isset($_SESSION[$key]);
}
/**
* Updates the counters for flash messages and removes outdated flash messages.
* This method should only be called once in [[init()]]
etc.
I've tried checking the owner/group permissions and setting chmod to 774 and 777, to no avail. I've also cleared the storage/runtime folder and the public/cpresources folder on each attempt.
What could be going on? I'm running PHP 7.4.2, and the Craft requirements are all met according to the checkit.php file.
I'd much appreciate any help!
You can see that PHP is trying to write temp files to store the session in, which is failing because the PHP process doesn't have the correct permissions for that folder:
session_start(): open(/var/lib/php/session/sess_6d4r4eip8iotcfif9hlbi701ap, O_RDWR) failed: Permission denied
You can solve this in a number of ways:
Change your PHP configuration to save sessions in a different folder that is writable by the process (see session_save_path). You would need to do this in your index.php and your craft file BEFORE Craft is initiated; there's a sketch of this after the list.
Change the permissions of the folder in the error message above so the PHP process has write access to it.
Configure Craft to store sessions in the database instead. See How can I store Craft sessions in the database?
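For the first option, here is a minimal sketch of what the top of index.php (and the craft console script) could look like before Craft is bootstrapped. The storage/sessions path is only an example; any folder the PHP process can write to will do:
<?php
// Store PHP session files in a folder the PHP process can write to,
// instead of the default /var/lib/php/session.
// "storage/sessions" is an example path; adjust it to your project layout.
$sessionPath = __DIR__ . '/../storage/sessions';
if (!is_dir($sessionPath)) {
    mkdir($sessionPath, 0770, true);
}
session_save_path($sessionPath);

// ...the rest of index.php (autoloader, Craft bootstrap) continues below.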
My website uses Drupal 8, and we are using the DropzoneJS module for the "media bulk upload" option. In my local environment, I'm able to bulk upload media without any problems. However, on the server environment (which has the same configuration as local), when I try to bulk upload media, it throws a "failed to open the output stream" error.
Any solutions/answers/suggestions are most welcome and much needed.
NOTE: Single item uploads work fine. Also during bulk upload, the error happens once the progress bar reaches 100%.
As of Drupal 8.6, this functionality needs a core patch to work properly. Three files need to be changed, as follows:
drupal/core/includes/files.inc (Line 234)
/** @var \Drupal\Core\StreamWrapper\StreamWrapperInterface $wrapper */
if ($wrapper = \Drupal::service('stream_wrapper_manager')->getViaUri($uri)) {
return $wrapper->getExternalUrl();
}
return FALSE;
drupal/core/modules/locale/src/StreamWrapper/TranslationsStream.php (at line 48, replace the function):
public function getExternalUrl() {
return FALSE;
}
3. drupal/core/modules/locale/tests/src/Functional/LocaleImportFunctionalTest.php (add at line 365):
/**
* Tests that imported PO files don't break the UI provided by "views".
*
* @throws \Behat\Mink\Exception\ExpectationException
*
* @link https://www.drupal.org/project/drupal/issues/2449895
*/
public function testPoFileImportAndAccessibilityOfFilesOverviewViewsPage() {
$this->container
->get('module_installer')
->install(['system', 'user', 'file', 'views']);
// Create and log in a user that's able to upload/import translations
// and has an access to the overview of files in a system.
$this->drupalLogin($this->drupalCreateUser([
'access administration pages',
'access files overview',
'administer languages',
'translate interface',
]));
// Import a dummy PO file.
$this->importPoFile($this->getPoFile(), [
'langcode' => 'fr',
]);
// The problem this test covers is exposed in an exception that is thrown
// by the "\Drupal\locale\StreamWrapper\TranslationsStream" when "views"
// module provides a page of files overview. Refer to the issue to find
// more information.
$this->drupalGet('admin/content/files');
$this->assertSession()->statusCodeEquals(200);
}
(At line 373, replace the existing function with the following)
public function importPoFile($contents, array $options = []) {
$file_system = $this->container->get('file_system');
$file_path = $file_system->tempnam('temporary://', 'po_') . '.po';
file_put_contents($file_path, $contents);
$options['files[file]'] = $file_path;
$this->drupalPostForm('admin/config/regional/translate/import', $options,
t('Import'));
$file_system->unlink($file_path);
}
I just moved my Magento site to Amazon EC2, but I keep getting a "Connection to Redis failed after 2 failures" error. I've tried removing the Redis configuration from app/etc/local.xml, but I still get that error.
I also tried disabling all the cache options directly in the core_cache_option table. I have no idea how to clean the already-cached files: there are no cache files under the var/cache folder (as expected), and I've tried FLUSHALL from the redis-cli command prompt, but I still keep getting this error.
Any idea what else I should try? This is the Redis <cache> configuration that was in local.xml:
<cache>
<backend_options>
<server><![CDATA[/var/tmp/_cache.sock]]></server>
<port><![CDATA[0]]></port>
<persistent><![CDATA[]]></persistent>
<database><![CDATA[0]]></database>
<password><![CDATA[]]></password>
<connect_retries><![CDATA[1]]></connect_retries>
<read_timeout><![CDATA[10]]></read_timeout>
<automatic_cleaning_factor><![CDATA[0]]></automatic_cleaning_factor>
<compress_data><![CDATA[1]]></compress_data>
<compress_tags><![CDATA[1]]></compress_tags>
<compress_threshold><![CDATA[20480]]></compress_threshold>
<compression_lib><![CDATA[gzip]]></compression_lib>
<use_lua><![CDATA[0]]></use_lua>
</backend_options>
<backend><![CDATA[Cm_Cache_Backend_Redis]]></backend>
</cache>
Given that EC2 instances are ephemeral, you should be able to regenerate the instance, right? If that's not an option, here are a few things to check.
First, check app/etc/ for other XML files. Magento will parse any XML file it finds in this folder. I've seen something like the following trip people up:
$ ls app/etc/*.xml
local.xml
local.backup.xml
Magento parses both local.xml and local.backup.xml, and the backup values override the newer values in local.xml. Also, make sure you're working with the local.xml you think you are. Magento loads the local configuration in the following method; add some temporary debugging to make sure it's doing what you think it's doing.
#File: app/code/core/Mage/Core/Model/Config.php
public function loadBase()
{
$etcDir = $this->getOptions()->getEtcDir();
$files = glob($etcDir.DS.'*.xml');
$this->loadFile(current($files));
while ($file = next($files)) {
var_dump($file); // temporary debugging: prints each additional XML file being merged
$merge = clone $this->_prototype;
$merge->loadFile($file);
$this->extend($merge);
}
if (in_array($etcDir.DS.'local.xml', $files)) {
$this->_isLocalConfigLoaded = true;
}
return $this;
}
Second, after you clear your cache, make sure Magento's reloading the configuration. Add some temporary debugging to
#File: app/code/core/Mage/Core/Model/Config.php
public function init($options=array())
{
$this->setCacheChecksum(null);
$this->_cacheLoadedSections = array();
$this->setOptions($options);
$this->loadBase();
$cacheLoad = $this->loadModulesCache();
if ($cacheLoad) {
var_dump("Loaded Config from Cache");
return $this;
}
else
{
var_dump("Reloading configuration");
}
$this->loadModules();
$this->loadDb();
$this->saveCache();
return $this;
}
Finally, if you suspect the problem is a file-based cache not clearing, drop some debugging code in
#File: app/code/core/Mage/Core/Model/Config/Options.php
public function getCacheDir()
{
//$dir = $this->getDataSetDefault('cache_dir', $this->getVarDir().DS.'cache');
$dir = $this->_data['cache_dir'];
$this->createDirIfNotExists($dir);
var_dump($dir); // temporary debugging: prints the resolved cache directory
return $dir;
}
This will let you know which cache directory Magento is reading from; if Magento can't read the local var folder, it falls back up to the root-level /var/ folder.
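If you want to verify the first point (stray XML files overriding local.xml) without adding debugging to core files, a quick standalone sketch like the following, run from the Magento root, lists every XML file in app/etc/ and shows which ones still declare a cache backend. It assumes the standard <config><global><cache><backend> layout shown in the question; the file name check_etc_xml.php is hypothetical:
<?php
// check_etc_xml.php: mirrors the glob() that Mage_Core_Model_Config::loadBase() performs.
foreach (glob(__DIR__ . '/app/etc/*.xml') as $file) {
    $xml = simplexml_load_file($file);
    $backend = $xml ? $xml->xpath('global/cache/backend') : array();
    echo $file . ' => '
        . ($backend ? (string) $backend[0] : '(no cache backend defined)')
        . PHP_EOL;
}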
Background: I'm working on an API which I host on EC2 servers. I just finished the login and set up an nginx load balancer which redirects to the servers' internal IPs. The domain name points to the load balancer.
This used to work well with CodeIgniter, but now I keep getting an "invalid host" problem.
I googled it and found some things about trusted proxies, so I followed fideloper's guide on laravel-4-trusted-proxies and tried his fideloper/TrustedProxy package from GitHub, but I still get the same error:
UnexpectedValueException
Invalid Host "api.myserver.im, api.myserver.im"
// as the host can come from the user (HTTP_HOST and depending on the configuration, SERVER_NAME too can come from the user)
// check that it does not contain forbidden characters (see RFC 952 and RFC 2181)
if ($host && !preg_match('/^\[?(?:[a-zA-Z0-9-:\]_]+\.?)+$/', $host)) {
throw new \UnexpectedValueException(sprintf('Invalid Host "%s"', $host));
}
Can someone help me?
I had the same issue. I had to resort to modifying the UrlGenerator.php file, which is part of the framework (bad, I know...) just to get this to work.
So here's my "temporary" solution.
Add a value to your app.php config file, e.g.:
return array(
'rooturl' => 'https://www.youractualdomainname.com',
...
Next, add the modification below to your UrlGenerator.php file (trunk/vendor/laravel/framework/src/Illuminate/Routing/UrlGenerator.php):
<?php namespace Illuminate\Routing;
use Config;
...
protected function getRootUrl($scheme, $root = null)
{
$approoturl = Config::get('app.rooturl');
$root = isset($approoturl) ? $approoturl : $this->request->root();
return $root;
// if (is_null($root))
// {
// $root = $this->forcedRoot ?: $this->request->root();
// }
// $start = starts_with($root, 'http://') ? 'http://' : 'https://';
// return preg_replace('~'.$start.'~', $scheme, $root, 1);
}
Do note that composer update will revert your modification.
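An alternative that avoids editing vendor code, assuming your framework version exposes the forceRootUrl() setter (the commented-out code above reads the $this->forcedRoot property that this setter populates), is to force the root URL at boot time, e.g. in a start file or a service provider's boot method, using the same app.rooturl config value:
// app/start/global.php (or a service provider's boot method), a sketch:
// force every generated URL to use the configured root so the forwarded,
// duplicated Host header is never consulted.
URL::forceRootUrl(Config::get('app.rooturl'));
Like the modification above, this assumes app.rooturl was added to the app.php config; unlike a vendor edit, it survives composer update.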
I'm using Hybridauth social login, and upon a user authenticating with Facebook, I receive the following error:
Warning: array_key_exists() [function.array-key-exists]: The second
argument should be either an array or an object in
/hybridauth/Hybrid/thirdparty/Facebook/base_facebook.php on line 1328
My guess (probably wrong) as to why this may be happening is that the parameters passed to Hybridauth come from the browser URL, and I have two: page=register and connected_with=facebook. Hybridauth only requires the second one...
It actually authenticates, but I want to get rid of this error. Why does this warning occur? Is there a way to hide it?
This is the bit that errors:
/**
* Get the base domain used for the cookie.
*/
protected function getBaseDomain() {
// The base domain is stored in the metadata cookie
// if not we fallback to the current hostname
$metadata = $this->getMetadataCookie();
if (array_key_exists('base_domain', $metadata) &&
!empty($metadata['base_domain'])) {
return trim($metadata['base_domain'], '.');
}
return $this->getHttpHost();
}
And this is the code it gets called from:
/**
* Destroy the current session
*/
public function destroySession() {
$this->accessToken = null;
$this->signedRequest = null;
$this->user = null;
$this->clearAllPersistentData();
// JavaScript sets a cookie that will be used in getSignedRequest
// that we need to clear if we can
$cookie_name = $this->getSignedRequestCookieName();
if (array_key_exists($cookie_name, $_COOKIE)) {
unset($_COOKIE[$cookie_name]);
if (!headers_sent()) {
$base_domain = $this->getBaseDomain();
setcookie($cookie_name, '', 1, '/', '.'.$base_domain);
} else {
// #codeCoverageIgnoreStart
self::errorLog(
'There exists a cookie that we wanted to clear that we couldn\'t '.
'clear because headers was already sent. Make sure to do the first '.
'API call before outputting anything.'
);
// #codeCoverageIgnoreEnd
}
}
}
It looks like getMetadataCookie() does not always return an array, possibly because the cookie has not yet been set. You may want to check that it's actually an array before using it as such:
if (is_array($metadata) && array_key_exists('base_domain', $metadata) &&
The same would apply to the array_key_exists() call in the second snippet: if you're unsure whether a value is actually an array when the cookie is not set, add the same check first.
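Put together, a minimal sketch of the guarded getBaseDomain() (the same logic as the original, with only the is_array() check added) could look like this:
protected function getBaseDomain() {
    // The base domain is stored in the metadata cookie; if the cookie has
    // not been set yet, getMetadataCookie() may not return an array, so
    // guard against that before calling array_key_exists().
    $metadata = $this->getMetadataCookie();
    if (is_array($metadata) &&
        array_key_exists('base_domain', $metadata) &&
        !empty($metadata['base_domain'])) {
      return trim($metadata['base_domain'], '.');
    }
    return $this->getHttpHost();
}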
I am trying to get the new PDO driver running in Code Igniter 2.1.1 in (to start with) the local (Mac OS 10.7) copy of my app.
I initially coded it using Active Record for all db operations, and I am now thinking I want to use PDO prepared statements in my model files, going forward.
I modified 'application/config/database.php' like so:
(note a couple minor embedded questions)
[snip]
$active_group = 'local_dev';
$active_record = TRUE;//<---BTW, will this need to stay TRUE to make CI sessions work? For better security, don't we want db-based CI sessions to use PDO too?
//http://codeigniter.com/user_guide/database/configuration.html:
//Note: that some CodeIgniter classes such as Sessions require Active Records be enabled to access certain functionality.
//this is the config setting that I am guessing (?) is my main problem:
$db['local_dev']['hostname'] = 'localhost:/tmp/mysql.sock';
// 1.) if $db['local_dev']['dbdriver']='mysql', then here ^^^ 'localhost:/tmp/mysql.sock' works, 2.) but if $db['local_dev']['dbdriver']='pdo', then it fails with error msg. shown below.
$db['local_dev']['username'] = 'root';
$db['local_dev']['password'] = '';
$db['local_dev']['database'] = 'mydbname';
$db['local_dev']['dbdriver'] = 'pdo';
$db['local_dev']['dbprefix'] = '';
$db['local_dev']['pconnect'] = TRUE;
$db['local_dev']['db_debug'] = TRUE;//TRUE
$db['local_dev']['cache_on'] = FALSE;
$db['local_dev']['cachedir'] = '';
$db['local_dev']['char_set'] = 'utf8';
$db['local_dev']['dbcollat'] = 'utf8_general_ci';
$db['local_dev']['swap_pre'] = '';
$db['local_dev']['autoinit'] = TRUE;
$db['local_dev']['stricton'] = FALSE;
[snip]
With the above config., as soon as I load a controller, I get this error message:
Fatal error: Uncaught exception 'PDOException' with message 'could not find driver' in
/Library/WebServer/Documents/system/database/drivers/pdo/pdo_driver.php:114 Stack trace: #0
/Library/WebServer/Documents/system/database/drivers/pdo/pdo_driver.php(114): PDO->__construct('localhost:/tmp/...', 'root', '', Array) #1 /Library/WebServer/Documents/system/database/DB_driver.php(115): CI_DB_pdo_driver->db_pconnect() #2
/Library/WebServer/Documents/system/database/DB.php(148): CI_DB_driver->initialize() #3
/Library/WebServer/Documents/system/core/Loader.php(346): DB('', NULL) #4
/Library/WebServer/Documents/system/core/Loader.php(1171): CI_Loader->database() #5
/Library/WebServer/Documents/system/core/Loader.php(152): CI_Loader->_ci_autoloader() #6
/Library/WebServer/Documents/system/core/Con in
/Library/WebServer/Documents/system/database/drivers/pdo/pdo_driver.php on line 114
I tried swapping out the 'pdo_driver.php' file with the one on GitHub, as per this:
http://codeigniter.com/forums/viewthread/206124/
...but that just generates other errors, and as a newbie I'd rather not touch the system files at all if possible.
This thread also seems to imply the need to be hacking the 'pdo_driver.php' system file:
CodeIgniter PDO database driver not working
It seems odd to me, though, that a hack to a system file would be needed to make PDO work in CI v2.1.1.
Thanks for any suggestions I can try.
I don't know if this will be helpful for you since you've already started using the CI functions, but I made my own library for PDO with SQLite and just autoload it. My needs were simple, so it serves its purpose.
<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');
/**
* CodeIgniter PDO Library
*
*
* @author Michael Cruz
* @version 1.0
*/
class Sqlite_pdo
{
var $DB;
public function connect($path) {
try {
$this->DB = new PDO('sqlite:' . $path);
}
catch(PDOException $e) {
print "Error: " . $e->getMessage();
die();
}
}
public function simple_query($SQL) {
$results = $this->DB->query($SQL)
or die('SQL Error: ' . print_r($this->DB->errorInfo(), true));
return $results;
}
public function prepared_query($SQL, $bind = array()) {
$q = $this->DB->prepare($SQL)
or die('Prepare Error: ' . print_r($this->DB->errorInfo(), true));
$q->execute($bind)
or die('Execute Error: ' . print_r($this->DB->errorInfo(), true));
$q->setFetchMode(PDO::FETCH_BOTH);
return $q;
}
public function my_prepare($SQL) {
$q = $this->DB->prepare($SQL)
or die('Error: ' . print_r($this->DB->errorInfo(), true));
return $q;
}
public function my_execute($q, $bind) {
$q->execute($bind)
or die('Error: ' . print_r($this->DB->errorInfo(), true));
$q->setFetchMode(PDO::FETCH_BOTH);
return $q;
}
public function last_insert_id() {
return $this->DB->lastInsertId();
}
}
/* End of file Sqlite_pdo.php */
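For reference, a minimal usage sketch, assuming the class above is saved as application/libraries/Sqlite_pdo.php and that /path/to/data.sqlite and the users table are placeholders for your own database:
// Inside any controller or model:
$this->load->library('sqlite_pdo');
$this->sqlite_pdo->connect('/path/to/data.sqlite');

// Prepared query with bound parameters; rows come back FETCH_BOTH.
$q = $this->sqlite_pdo->prepared_query(
    'SELECT * FROM users WHERE email = ?',
    array('someone@example.com')
);
foreach ($q as $row) {
    echo $row['email'];
}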
Thanks to the noob thread http://codeigniter.com/forums/viewthread/180277/ (InsiteFX's answer), I figured out that the below seems to work (I need to test more to be 100% sure, but at least the error messages are gone):
$db['local_dev']['hostname'] = 'mysql:host=127.0.0.1';
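For completeness, a sketch of how the rest of the connection group might look with that DSN-style hostname. This assumes the CI 2.1.x PDO driver hands the hostname value straight to the PDO constructor, so appending dbname to the DSN is a common precaution; adjust the credentials and database name to your own setup:
$db['local_dev']['hostname'] = 'mysql:host=127.0.0.1;dbname=mydbname';
$db['local_dev']['username'] = 'root';
$db['local_dev']['password'] = '';
$db['local_dev']['database'] = 'mydbname';
$db['local_dev']['dbdriver'] = 'pdo';
$db['local_dev']['char_set'] = 'utf8';
$db['local_dev']['dbcollat'] = 'utf8_general_ci';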