In testing I used the db:clear command in an effort to truncate my data. The command description states:
Clear all tables except for Passport and Laravel tables
However, when I ran it, the following was output, and the tables were truncated:
Truncated oauth_access_tokens
Truncated oauth_auth_codes
Truncated oauth_personal_access_clients
Truncated oauth_refresh_tokens
Based on the command description, I was not expecting these tables to be truncated. Is this expectation incorrect? I attempted to find the command in question to see if this was intended functionality, but could not find it in the Laravel source.
Laravel version 8.x
After further investigation of the data after clearing the DB, the command appears to have touched only some of the Passport tables. Based on the command description I was not expecting to see any Passport tables in the output, but it seems it does truncate Passport tables that hold non-reusable data like access tokens and auth codes. It does keep data for things like clients, though.
It is still unclear to me where the command is located, but that was a secondary concern.
Related
I am using Laravel for the project I am currently working on, and the question above is one of the things that needs to be implemented in it.
There is a column that needs to be reset every day for all users. The project may contain hundreds or thousands of users, so what is the best way to do this without causing performance issues or overloading the server?
I wanted to use Laravel's own scheduling, but I'm not sure whether this is quite the right thing to do.
Please help :)
You should create a job and schedule a command for this. With a command, even if you ever have to reset the column manually, you can just run that command.
So, what you would do is:
php artisan make:command ResetColumnCommand
and, to generate a job:
php artisan make:job ResetColumnJob
and then, inside the job, write the logic, which would look something like this:
$query = SomeModel::query();
$query->chunk(100, function ($records) {
    foreach ($records as $record) {
        $record->update([
            // Set the column to null, e.g. 'some_column' => null
        ]);
    }
});
Since you're sure there can be a lot of records, you should definitely use chunk() in order to keep memory usage low.
NOTE: If you're applying a condition on that same column before chunking (e.g. you only want records where the column is not already null), you should use chunkById() instead of chunk(), because chunk() can skip or repeat rows and produce unexpected results when the rows you are updating also change the outcome of the query.
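For completeness, here is a minimal sketch of wiring the command to the job and registering the schedule; the signature column:reset, the job namespace and the midnight run time are assumptions for illustration, not something from the original question:

// app/Console/Commands/ResetColumnCommand.php
use App\Jobs\ResetColumnJob;
use Illuminate\Console\Command;

class ResetColumnCommand extends Command
{
    protected $signature = 'column:reset';
    protected $description = 'Reset the daily column for all users';

    public function handle()
    {
        // Push the heavy work onto the queue instead of running it inline
        ResetColumnJob::dispatch();
    }
}

// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    // Run the reset every day at midnight
    $schedule->command('column:reset')->dailyAt('00:00');
}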
Although the TYPO3 core takes good care of ensuring that all tables exist, there might be situations where you need to check whether a table exists.
The situation at hand is an Update Wizard which interacts with another extension, where the other extension has a migration changing table names.
So: how do you check whether a table exists in current TYPO3, i.e. using Doctrine, and possibly even across multiple database connections?
At least for 10LTS and 11LTS (and, as of now, probably 12LTS too):
return GeneralUtility::makeInstance(ConnectionPool::class)
->getConnectionForTable($tablename)
->getSchemaManager()
->tablesExist([$tablename]);
This works because, if no connection is defined for the table (because the table doesn't exist), the default connection is still used, and the check can be done there.
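As a usage sketch, the check can be wrapped in a small helper inside the update wizard; the class name and the table name below are purely hypothetical:

use TYPO3\CMS\Core\Database\ConnectionPool;
use TYPO3\CMS\Core\Utility\GeneralUtility;

class MyUpgradeWizard
{
    private function tableExists(string $tablename): bool
    {
        // getConnectionForTable() falls back to the default connection
        // when no mapping is configured for the given table
        return GeneralUtility::makeInstance(ConnectionPool::class)
            ->getConnectionForTable($tablename)
            ->getSchemaManager()
            ->tablesExist([$tablename]);
    }

    public function updateNecessary(): bool
    {
        // Only offer the wizard when the other extension's renamed table is present
        return $this->tableExists('tx_otherextension_domain_model_item');
    }
}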
I am loading a CSV file into my database using SQL*Loader. My requirement is to create an error file combining the error records from the .bad file with their individual errors from the log file. Meaning, if a record failed because the date is invalid, then against that record, in a separate error-description column, "Invalid date" should be written. Is there any way SQL*Loader provides to combine the two? I am a newbie to SQL*Loader.
Database being used: Oracle 19c.
You might be expecting a little bit too much of SQL*Loader.
How about switching to an external table? In the background it still uses SQL*Loader, but the source data (which resides in a CSV file) is accessible to you by means of a table.
What does that mean for you? You'd write some (PL/)SQL code to fetch data from it. Therefore, if you wrote a stored procedure, there are numerous options you can use - perform various validations, store valid data into one table and invalid data into another, decide what to do with invalid values (discard? modify to something else? ...), handle exceptions - basically, everything PL/SQL offers.
Note that this option (generally speaking) requires the file to reside on the database server, in a directory which is the target of an Oracle directory object. The user who will be manipulating the CSV data (i.e. the external table) will have to be granted privileges on that directory by its owner, the SYS user.
SQL*Loader, on the other hand, runs on a local PC, so you don't have to have access to the server itself, but - as I said - it doesn't provide that much flexibility.
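For illustration, here is a minimal sketch of the external-table approach. It assumes a directory object named CSV_DIR already points at the folder containing data.csv; the table names, column names and date format are made up for the example:

CREATE TABLE ext_csv_data (
  id        VARCHAR2(20),
  hire_date VARCHAR2(20)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY csv_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('data.csv')
)
REJECT LIMIT UNLIMITED;

-- Rows whose date converts cleanly go into the target table ...
INSERT INTO employees (id, hire_date)
SELECT id, TO_DATE(hire_date, 'YYYY-MM-DD')
FROM   ext_csv_data
WHERE  VALIDATE_CONVERSION(hire_date AS DATE, 'YYYY-MM-DD') = 1;

-- ... while the rest land in an error table together with a description
INSERT INTO employees_errors (id, raw_date, error_description)
SELECT id, hire_date, 'Invalid date'
FROM   ext_csv_data
WHERE  VALIDATE_CONVERSION(hire_date AS DATE, 'YYYY-MM-DD') = 0;

VALIDATE_CONVERSION is available from Oracle 12.2 onward, so it can be used on 19c.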
It is hard to give you a code answer without an example.
If you want to accomplish this task, I can suggest two ways.
From Linux.
If you loaded the data and skipped the errors, you must do two executions.
That is not an easy way, and it is not effective.
From Oracle.
Create a table with VARCHAR2 columns of the same length as in the original.
Load the data from the .bad file: adapt your CTL file so that everything is accepted, and load it into this second table.
Finally, MERGE the rows back into the original table.
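A minimal sketch of that staging-table idea; every name and the date format below are made up for illustration:

-- Staging table: everything as plain text, same lengths as the original columns
CREATE TABLE employees_stage (
  id        VARCHAR2(20),
  hire_date VARCHAR2(20)
);

-- After loading the .bad file into employees_stage with the adapted CTL,
-- merge only the rows that now convert cleanly back into the original table
MERGE INTO employees e
USING (
  SELECT id,
         TO_DATE(hire_date, 'YYYY-MM-DD') AS hire_date
  FROM   employees_stage
  WHERE  VALIDATE_CONVERSION(hire_date AS DATE, 'YYYY-MM-DD') = 1
) s
ON (e.id = s.id)
WHEN MATCHED THEN
  UPDATE SET e.hire_date = s.hire_date
WHEN NOT MATCHED THEN
  INSERT (id, hire_date) VALUES (s.id, s.hire_date);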
I made a command named "offers:update" and registered it in Kernel.php.
This command should do some changes on "offers" table depending on current status of each offer daily at 00:00.
My "offers" table might have above 100,000 rows and i want to do changes in the the most optimized way.
I've read the documentation (v 5.8) and found chunk() method.
Is it enough or there is better idea?
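For reference, a minimal sketch of the setup described above, using chunkById() so that the 100,000+ rows are never loaded into memory at once; the Offer model and the status logic are placeholders, not part of the original question:

// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    $schedule->command('offers:update')->dailyAt('00:00');
}

// Inside the handle() method of the offers:update command
Offer::query()->chunkById(500, function ($offers) {
    foreach ($offers as $offer) {
        // Placeholder logic: change each offer depending on its current status
        if ($offer->status === 'active') {
            $offer->update(['status' => 'expired']);
        }
    }
});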
After running 'generateChangelog' on an Oracle database, the changelog file has the wrong type (or rather, simply a bad value) for some fields, independently of the driver used.
More specifically, some of the RAW columns are translated to STRING (which sounds okay), but values like "E01005C6842100020200000E10000000" are translated to "[B#433defed", which seems to be some BLOB-like entity. Also, these are the only data-related differences between the original database content and the backup.
When I try to restore the DB with 'update', these columns cause problems: "Unexpected error running Liquibase: *****: invalid hex number".
Is there any way to force Liquibase to save the problem columns "as-is", or any other way to overcome this situation? Or is it a bug?
I think more information is needed to be able to diagnose this. Ideally, if you suspect something may be a bug, you provide three things:
what steps you took (this would include the versions of things being used, relevant configuration, commands issued, etc.)
what the actual results were
what the expected results were
Right now we have some idea of the first item (you ran generateChangelog on Oracle, then tried to run update), but we are missing things like the structure of the Oracle database, the versions of Oracle and Liquibase, and the actual command issued. We have some idea of the actual results (columns that are of type RAW in Oracle are converted to STRING in the changelog, and the data in those columns may also be converted to different values than you expect) and some idea of the expected results (you expected the RAW data to be saved in the changelog and then to be able to re-deploy that change).
That being said, using Liquibase to back up and restore a database (especially one that has columns of type RAW/CLOB/BLOB) is probably not a good idea.
Liquibase is primarily aimed at helping manage changes to the structure of a database, not so much the data contained within it.