I store my sessions in the database with CodeIgniter.
Maybe my question is stupid, but I haven't found out whether session data has a size limit, and I can't work out what would happen in the app if that limit were reached.
Thank you
There is no limit imposed by CodeIgniter itself when saving sessions to the database. CI serializes the data using serialize() and writes it to the database.
The only limit is therefore the database field, which differs by column type:
TEXT: 65,535 bytes (~64 KB)
MEDIUMTEXT: 16,777,215 bytes (~16 MB)
LONGTEXT: 4,294,967,295 bytes (~4 GB)
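If you hit the ~64 KB ceiling of a TEXT column, one option is to widen the column. A minimal sketch in raw SQL, assuming the stock ci_sessions table with a user_data column (adjust the table and column names to your schema):
-- Raise the per-session data limit from ~64 KB to ~16 MB
ALTER TABLE `ci_sessions` MODIFY `user_data` MEDIUMTEXT NOT NULL;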
You can store user_data in the database in a TEXT field; a TEXT field can hold ~64 KB.
Set the session data column to a BLOB type; that is what the CodeIgniter documentation recommends:
CREATE TABLE IF NOT EXISTS `ci_sessions` (
`id` varchar(128) NOT NULL,
`ip_address` varchar(45) NOT NULL,
`timestamp` int(10) unsigned DEFAULT 0 NOT NULL,
`data` blob NOT NULL,
KEY `ci_sessions_timestamp` (`timestamp`)
);
There are some similar questions, but none is exactly what I want.
How can I increase this limit? Where (in which file) can I replace this number (767 bytes) with another?
-- Dumping structure for table jofr.categories
CREATE TABLE IF NOT EXISTS `categories` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`parent_id` int(10) unsigned DEFAULT NULL,
`order` int(11) NOT NULL DEFAULT '1',
`name` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`slug` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `categories_slug_unique` (`slug`),
KEY `categories_parent_id_foreign` (`parent_id`),
CONSTRAINT `categories_parent_id_foreign` FOREIGN KEY (`parent_id`) REFERENCES `categories` (`id`) ON DELETE SET NULL ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
MySQL said:
#1071 - Specified key was too long; max key length is 767 bytes
You could set the default string length inside the AppServiceProvider.php file, in its boot() method:
app/Providers/AppServiceProvider.php
use Illuminate\Support\Facades\Schema;
public function boot()
{
Schema::defaultStringLength(191);
}
For more information, you can refer to a similar question asked on Stack Overflow.
This is a MySQL InnoDB limit, not a PHP one. You can only work around the limit using code, or you can potentially increase the limit using a database option.
The issue is with your index on slug. slug is defined as 255 characters, and you're using utf8mb4, which uses 4 bytes per character, so your index on slug would require 1020 bytes.
There are a couple of workarounds.
1. Reduce the size of your field.
Instead of making your slug 255 characters, make it 191. 191 * 4 = 764 < 767. You can do this by specifying the field length in your migration, or by not specifying the length and setting \Illuminate\Support\Facades\Schema::defaultStringLength(191); as mentioned by others.
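For an existing table, the equivalent change in raw SQL would look something like this (using the categories table from the question; check that no existing slug exceeds 191 characters before shrinking):
-- 191 * 4 = 764 bytes, so a full-column unique index now fits under 767
ALTER TABLE `categories` MODIFY `slug` VARCHAR(191) COLLATE utf8mb4_unicode_ci NOT NULL;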
2. Reduce the size of your index.
You can keep your field size at 255, but tell MySQL to only index the first 191 characters. I don't know if Laravel migrations support the prefix syntax directly, but you can always try; if not, fall back to the raw statement sketched below.
$table->index('slug(191)');
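If the schema builder rejects that, the raw SQL it would need to produce is a prefix index; a sketch (the index name here is my own choice):
-- Index only the first 191 characters: 191 * 4 = 764 bytes <= 767
ALTER TABLE `categories` ADD INDEX `categories_slug_prefix_index` (`slug`(191));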
3. Enable the innodb_large_prefix database option with DYNAMIC row formats.
The innodb_large_prefix database option increases the key length limit to 3072 bytes. However, this option only affects tables that have a row format of DYNAMIC or COMPRESSED.
If you're on MySQL >= 5.7.7, the innodb_large_prefix option is enabled by default.
If you're on MySQL >= 5.7.9, the default row format is DYNAMIC, and the innodb_large_prefix option is enabled by default, so you wouldn't be having this issue, unless you've changed the defaults.
If you're on MySQL < 5.7.9, the default row format is COMPACT, so you'd need to figure out how to tell Laravel to use the DYNAMIC row format. If you want to do this for all tables, you can set 'engine' => 'InnoDB ROW_FORMAT=DYNAMIC', in your database config. If you only want to do this for one table, you'll need to run raw DB create statements in your migration file.
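For the single-table case, the raw create statement might look like this (a sketch trimmed down from the question's table):
CREATE TABLE `categories` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `slug` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `categories_slug_unique` (`slug`)
  -- DYNAMIC rows raise the index key limit to 3072 bytes when innodb_large_prefix is on
) ENGINE=InnoDB ROW_FORMAT=DYNAMIC DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;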
References:
MySQL create index documentation - information on key size, row formats, and partial indexes
MySQL innodb_large_prefix option
MySQL innodb_default_row_format option
The problem is your unique columns with a length greater than 191 characters:
`slug` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
Replace varchar(255) with varchar(191) on this line and the error will be resolved.
Edit your AppServiceProvider.php file and, inside the boot method, set a default string length:
use Illuminate\Support\Facades\Schema;
public function boot()
{
Schema::defaultStringLength(191);
}
I'm getting an error when I use DataMapper's auto_upgrade! method to add fields in an SQLite3 db based on the properties defined in my code:
DataObjects::SyntaxError at /history
Cannot add a NOT NULL column with default value NULL
An example of an offending line would be:
property :fieldname, Text, required: true
The error goes away if I (a) remove the line, (b) remove required: true, (c) change true to false, or (d) add a default value.
SQLite does not require a default value to be specified for every field, so this problem is definitely with DataMapper, not SQLite.
How can I get around this, so DataMapper can specify that a field is required without assuming that not specifying a default value automatically means the default should be NULL?
(If you want to know more about why I'm designing this way: there will be another client process accessing SQLite and logging data into the SQLite database, while a Sinatra app will be pulling data out of the db for display in a browser. I want the database therefore to enforce the field requirements, but DM's auto_upgrade is a very convenient way to be able to upgrade the database as needed—so long as it doesn't foul things up in the process.)
You are requiring the field, hence it cannot be NULL; these are basic table semantics.
When DataMapper runs auto_upgrade!, it executes SQL commands like the following on the database:
CREATE TABLE Test
(
P_Id int NOT NULL,
lname varchar(255) NOT NULL,
fname varchar(255),
Address varchar(255),
City varchar(255)
)
And doing something like this won't work.
CREATE TABLE Test
(
P_Id int NOT NULL,
lname varchar(255) NOT NULL DEFAULT NULL,
fname varchar(255),
Address varchar(255),
City varchar(255)
)
I tested it in MySQL and this is the error:
02:52:43  CREATE TABLE TestTest (P_Id int NOT NULL, lname varchar(255) NOT NULL DEFAULT NULL, fname varchar(255), Address varchar(255), City varchar(255))  Error Code: 1067. Invalid default value for 'lname'  0.062 sec
Correction: SQLite does allow you to create a table with such properties. However, any insert that omits that column then throws an error, because the NULL default violates the NOT NULL constraint. So DataMapper might be doing some sanitization for you before even creating the table.
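A quick way to see this behavior in SQLite itself (hypothetical table name, run in the sqlite3 shell):
-- SQLite accepts the contradictory definition at creation time
CREATE TABLE t (name TEXT NOT NULL DEFAULT NULL);
-- Supplying a value works fine
INSERT INTO t (name) VALUES ('x');
-- Omitting the column fails: NOT NULL constraint failed: t.name
INSERT INTO t DEFAULT VALUES;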
It is not clear to me if you are creating a new table or modifying an existing one.
If you have an existing table and are trying to alter it with a column defined as NOT NULL, then you must provide a default value so that the existing rows can be migrated. The RDBMS needs to know what to put in the field for pre-existing rows.
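In plain SQL, the distinction looks roughly like this (hypothetical table and column names; the first statement is what auto_upgrade! effectively attempts):
-- Rejected: SQLite requires a non-NULL default when adding a NOT NULL column
ALTER TABLE histories ADD COLUMN fieldname TEXT NOT NULL;
-- Accepted: existing rows are backfilled with the empty string
ALTER TABLE histories ADD COLUMN fieldname TEXT NOT NULL DEFAULT '';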
If you are creating a new table, then the property definition you have should be fine.
For one of my custom modules, I have the following SQL setup script:
$installer = $this;
$installer->startSetup();
$installer->run("
CREATE TABLE {$this->getTable('wholesale')} (
`wholesale_id` int(11) unsigned NOT NULL auto_increment,
`title` varchar(255) NOT NULL default '',
`percentage` float(5,3) NOT NULL,
`status` smallint(6) NOT NULL default '0',
`created_time` datetime NULL,
`update_time` datetime NULL,
PRIMARY KEY (`wholesale_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
");
$installer->endSetup();
The value passed to the `percentage` float(5,3) NOT NULL column is saved with a decimal part, but it is always 00. For example, if I save the value 10.23, it is stored in the database as 10.00.
Please help.
I ran into the same issue. After clearing the cache with "Flush Magento cache", it started to save properly. In my case, I had altered the column and it was being cached.
I truncated the table and inserted one record manually, and it worked for me. I am not sure why this happened, but it worked.
I would like to know how to save the number of times a user accesses a webpage, and how to return that count to the user.
E.g. a user opens his user profile on a web page and sees:
Number of accesses: 20 times.
I have created the page with Codeigniter and created a table in the database.
CREATE TABLE IF NOT EXISTS `ci_sessions` (
session_id varchar(40) DEFAULT '0' NOT NULL,
ip_address varchar(45) DEFAULT '0' NOT NULL,
user_agent varchar(120) NOT NULL,
last_activity int(10) unsigned DEFAULT 0 NOT NULL,
user_data text NOT NULL,
PRIMARY KEY (session_id),
KEY `last_activity_idx` (`last_activity`)
);
But I don't know how to save the number of times the user has accessed the webpage.
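One possible approach, sketched in raw SQL under the assumption of a separate per-user counter table (all names here are hypothetical, not from the question's schema):
CREATE TABLE IF NOT EXISTS `page_visits` (
  `user_id` int(10) unsigned NOT NULL,
  `visits` int(10) unsigned NOT NULL DEFAULT 0,
  PRIMARY KEY (`user_id`)
);
-- Run once per page load: insert the first visit or increment the count
INSERT INTO `page_visits` (`user_id`, `visits`) VALUES (42, 1)
ON DUPLICATE KEY UPDATE `visits` = `visits` + 1;
-- Read the total back for display on the profile page
SELECT `visits` FROM `page_visits` WHERE `user_id` = 42;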
I have a MySQL table consisting of:
CREATE TABLE `url_list` (
`id` int(10) unsigned NOT NULL auto_increment,
`crc32` int(10) unsigned NOT NULL,
`url` varchar(512) NOT NULL,
PRIMARY KEY (`id`),
KEY `crc32` (`crc32`)
);
When inserting data into a related table I need to lookup the primary key from this table, and using the crc32 really speeds that up whilst allowing a small index. The URLs do need to be unique, but I'd like to avoid having more index than actual data.
If the value isn't present I need to insert it, but structures such as INSERT IGNORE or ON DUPLICATE KEY either require me to put a unique index on the huge varchar, or don't take advantage of my index.
How can I "SELECT id else INSERT", whilst preserving the lookup speed for the 80-90% of hits that are already in the table?
I would recommend ditching the id column and the crc32 because they're not necessary.
You can use an MD5() hash to provide a fixed-length, virtually unique value computed from the lengthy URL data, and then use that hash as the primary key.
CREATE TABLE `url_list` (
  `url_hash` BINARY(16) NOT NULL PRIMARY KEY,
  `url` VARCHAR(512) NOT NULL
);
DELIMITER !!
CREATE TRIGGER `url_ins` BEFORE INSERT ON `url_list`
FOR EACH ROW
BEGIN
  -- Derive the fixed-length primary key from the URL automatically
  SET NEW.`url_hash` = UNHEX( MD5( NEW.`url` ) );
END!!
DELIMITER ;
Then you can use INSERT..ON DUPLICATE KEY UPDATE because unlike crc32, the hash should have a very low chance of collision.
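Usage would then look something like this (the example URL is arbitrary):
-- The trigger fills url_hash, so re-inserting the same URL collapses onto one row
INSERT INTO `url_list` (`url`) VALUES ('http://example.com/page')
ON DUPLICATE KEY UPDATE `url` = VALUES(`url`);
-- Lookups recompute the hash and hit the primary key directly
SELECT `url` FROM `url_list` WHERE `url_hash` = UNHEX(MD5('http://example.com/page'));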
edit: See http://en.wikipedia.org/wiki/Birthday_attack. If you log 1 million distinct URL's per day for 2,000 years, the MD5 hashes of these URL's are still less likely to include a collision than your hard disk is to have an uncorrectable bit error.