Oracle 10g: Can CLOB data lengths be less than 4,000?

We have three databases: dev, staging, and production. We do all our coding in the dev environment. We then push all our code and database changes to staging so the client can see how it works in a live environment. After they sign off, we do the final deployment to the production environment.
Now, about these CLOB columns: when using desc and/or querying the all_tab_columns view in the dev database, CLOBs show a data length of 4,000. However, in the staging and production databases, the data lengths for the equivalent CLOB columns are odd numbers like 86. I've searched everywhere for an explanation of how this could have come about. I've even tried adding a new CLOB(86) column, thinking it would work the way it does for VARCHAR2, but Oracle just spits out an error.
Could the DBAs have botched something up? Is this even something to worry about? Nothing has ever seemed to break as a result of this, but I just like the metadata to be the same across all environments.

First of all, as a DBA I am sorry to see the lack of cooperation between you and your DBAs; we all need to cooperate to be successful. CLOB data lengths can indeed be less than 4,000 bytes.
create table z ( a number, b clob);
Table created.
insert into z values (1, 'boe');
1 row created.
exec dbms_stats.gather_table_stats (ownname => 'ronr', tabname => 'z');
PL/SQL procedure successfully completed.
select owner, avg_row_len from dba_tables where table_name = 'Z'
SQL> /
OWNER AVG_ROW_LEN
------------------------------ -----------
RONR 109
select length(b) from z;
LENGTH(B)
----------
3
Where did you find that a CLOB length cannot be less than 4000?

DATA_LENGTH stores the maximum number of bytes that a column can take up within the row. If the CLOB can be stored in row, then that maximum is 4000: a LOB never occupies more than 4000 bytes inside the row itself. If in-row storage is disabled, then the column only stores the pointer information needed to find the LOB data, which is much less than 4000 bytes.
SQL> create table t (clob_in_table clob
2 , clob_out_of_table clob
3 ) lob (clob_out_of_table) store as (disable storage in row)
4 , lob (clob_in_table) store as (enable storage in row)
5 /
Table created.
SQL> select table_name, column_name, data_length
2 from user_tab_columns
3 where table_name = 'T'
4 /
TABLE_NAME COLUMN_NAME DATA_LENGTH
------------------------------ ------------------------------ -----------
T CLOB_IN_TABLE 4000
T CLOB_OUT_OF_TABLE 86
EDIT, adding info on *_LOBS view
Use the [DBA|ALL|USER]_LOBS views to look at the defined in-row/out-of-row storage settings:
SQL> select table_name
2 , cast(substr(column_name, 1, 30) as varchar2(30))
3 , in_row
4 from user_lobs
5 where table_name = 'T'
6 /
TABLE_NAME CAST(SUBSTR(COLUMN_NAME,1,30)A IN_
------------------------------ ------------------------------ ---
T CLOB_IN_TABLE YES
T CLOB_OUT_OF_TABLE NO
EDIT 2, some references
See LOB Storage in Oracle Database Application Developer's Guide - Large Objects for more information on defining LOB storage, especially the third note that talks about what can be changed:
Note:
Only some storage parameters can be modified. For example, you can use the ALTER TABLE ... MODIFY LOB statement to change RETENTION, PCTVERSION, CACHE or NOCACHE, LOGGING or NOLOGGING, and the STORAGE clause. You can also change the TABLESPACE using the ALTER TABLE ... MOVE statement.
However, once the table has been created, you cannot change the CHUNK size, or the ENABLE or DISABLE STORAGE IN ROW settings.
Also, LOBs in Index Organized Tables says:
By default, all LOBs in an index organized table created without an overflow segment will be stored out of line. In other words, if an index organized table is created without an overflow segment, then the LOBs in this table have their default storage attributes as DISABLE STORAGE IN ROW. If you forcibly try to specify an ENABLE STORAGE IN ROW clause for such LOBs, then SQL will raise an error.
This explains why jonearles did not see 4,000 in the data_length column when he created the LOB in an index organized table.

CLOBs don't have a specified length. When you query ALL_TAB_COLUMNS, e.g.:
select table_name, column_name, data_length
from all_tab_columns
where data_type = 'CLOB';
You'll notice that data_length is reported as 4000, but for CLOBs that figure should be ignored.
The minimum size of a CLOB is zero (0), and the maximum is anything from 8 TB to 128 TB depending on the database block size.
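As a rough arithmetic sketch, that range follows from the classic BasicFile limit of (4 GB - 1) * DB_BLOCK_SIZE; assuming you can query v$parameter (which may need extra privileges):
-- upper bound in terabytes; with the common 8K block size this is about 32 TB
select (power(2, 32) - 1) * value / power(1024, 4) as max_clob_tb
from v$parameter
where name = 'db_block_size';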

As ik_zelf and Jeffrey Kemp pointed out, CLOBs can store less than 4000 bytes.
But why are CLOB data_lengths not always 4000? The number doesn't actually limit the CLOB, but you're probably right to worry about the metadata being different on your servers. You might want to run DBMS_METADATA.GET_DDL on the objects on all servers and compare the results.
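For example, a minimal sketch (the table and schema names are placeholders to substitute):
set long 100000
select dbms_metadata.get_ddl('TABLE', 'YOUR_TABLE', 'YOUR_SCHEMA') from dual;
Running this on each environment and diffing the output will show any storage-clause differences.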
I was able to create a low data_length by adding a CLOB to an index organized table.
create table test
(
column1 number,
column2 clob,
constraint test_pk primary key (column1)
)
organization index;
select data_length from user_tab_cols
where table_name = 'TEST' and column_name = 'COLUMN2';
On 10.2.0.1.0, the result is 116.
On 11.2.0.1.0, the result is 476.
Those numbers don't make any sense to me and I'd guess it's a bug. But I don't have a good understanding of the different storage options, maybe I'm just missing something.
Does anybody know what's really going on here?

Related

Is it possible to add a custom metadata field to Oracle Data Dictionary?

Is it possible to add a metadata field at column-level (in the Oracle Data Dictionary)?
The purpose would be to hold a flag identifying where individual data items in a table have been anonymised.
I'm an analyst (not a DBA) and I'm using Oracle SQL Developer, which surfaces (and enables querying of) the COLUMN_NAME, DATA_TYPE, NULLABLE, DATA_DEFAULT, COLUMN_ID, and COMMENTS metadata fields of our Oracle DB.
I'd be looking to add another metadata field at this level (essentially, to add a second 'COMMENTS' field) to hold the 'Anonymisation' flag, to support easy querying of our flagged-anonymised data.
If it's possible (and advisable / supportable), I'd be grateful for any advice describing the steps required to enable this, which I can then discuss with our developer and DBA.
Short answer: NO.
But where could you keep that information?
In your data model.
Oracle provides a free data modeling solution, Oracle SQL Developer Data Modeler. It provides the ability to mark table/view columns as sensitive or PII.
Those same models can be stored back in your database so they can be accessed via SQL.
Once you've marked up all of your sensitive attributes/columns and stored the model back into the database, you can query it back out.
Disclaimer: I work for Oracle, I'm the product manager for Data Modeler.
[TL;DR] Don't do it. Find another way.
If it's advisable
NO
Never modify the data dictionary (unless Oracle Support tells you to); you are likely to invalidate your support contract with Oracle and may break the database, making it unusable.
If it's possible
Don't do this.
If you really want to try it then still don't.
If you really, really want to try it then find a database you don't care about (the don't care about bit is important!) and log on as a SYSDBA user and:
ALTER TABLE whichever_data_dictionary_table ADD anonymisation_flag VARCHAR2(10);
Then you can test whether the database breaks (and it may not break immediately but at some point later), but if it does then you almost certainly will not get any support from Oracle in fixing it.
Did we say, "Don't do it"... we mean it.
As you already know, you shouldn't do that.
But nothing prevents you from creating your own table to hold that information.
For example:
SQL> CREATE TABLE my_comments
2 (
3 table_name VARCHAR2 (30),
4 column_name VARCHAR2 (30),
5 anonymisation VARCHAR2 (10)
6 );
Table created.
Populate it with some data:
SQL> insert into my_comments (table_name, column_name)
2 select table_name, column_name
3 from user_tab_columns
4 where table_name = 'DEPT';
3 rows created.
Set the anonymisation flag:
SQL> update my_comments set anonymisation = 'F' where column_name = 'DEPTNO';
1 row updated.
When you want to retrieve that info (along with some more data from user_tab_columns), use an (outer) join:
SQL> select u.table_name, u.column_name, u.data_type, u.nullable, m.anonymisation
2 from user_tab_columns u left join my_comments m on m.table_name = u.table_name
3 and m.column_name = u.column_name
4 where u.column_name = 'DEPTNO';
TABLE_NAME COLUMN_NAME DATA_TYPE N ANONYMISATION
---------- --------------- ------------ - ---------------
DEPT DEPTNO NUMBER N F
DSV DEPTNO NUMBER N
DSMV DEPTNO NUMBER Y
EMP DEPTNO NUMBER Y
SQL>
Advantage: you won't break the database, and you'll have your additional info.
Drawback: you'll have to maintain the table manually.
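One way to help with that maintenance (a small sketch) is to periodically list columns that have no entry in my_comments yet:
select u.table_name, u.column_name
from user_tab_columns u
left join my_comments m on m.table_name = u.table_name
and m.column_name = u.column_name
where m.table_name is null;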

Oracle: Coercing VARCHAR2 and CLOB to the same type without truncation

In an app that supports MS SQL Server, MySQL, and Oracle, there's a table with the following relevant columns (types shown here are for Oracle):
ShortText VARCHAR2(1700) indexed
LongText CLOB
The app stores values 850 characters or less in ShortText, and longer ones in LongText. I need to create a view that returns that data, whichever column it's in. This works for SQL Server and MySQL:
SELECT
CASE
WHEN ShortText IS NOT NULL THEN ShortText
ELSE LongText
END AS TheValue
FROM MyTable
However, on Oracle, it generates this error:
ORA-00932: inconsistent datatypes: expected CHAR got CLOB
...meaning that Oracle won't implicitly convert the two columns to the same type, so the query has to do it explicitly. I don't want data to get truncated, so the chosen type has to hold as much data as a CLOB, which as I understand it (not being an Oracle expert) means CLOB is the only choice.
This works on Oracle:
SELECT
CASE
WHEN ShortText IS NOT NULL THEN TO_CLOB(ShortText)
ELSE LongText
END AS TheValue
FROM MyTable
However, performance is amazingly awful. A query that returns LongText directly took 70-80 ms for about 9k rows, but the above construct took between 30 and 60 seconds, which is unacceptable.
So:
Are there any other Oracle types I could coerce both columns to that can hold as much data as a CLOB? Ideally something more text-oriented, like MySQL's LONGTEXT or SQL Server's NTEXT (or even better, NVARCHAR(MAX))?
Any other approaches I should be looking at?
Some specifics, in particular ones requested by @Guido Leenders:
Oracle version: Oracle Database 11g 11.2.0.1.0 64bit Production
Not certain if I was the only user, but the relative times are still striking.
Stats for the small table where I saw the performance I posted earlier:
rowcount: 9,237
varchar column total length: 148,516
clob column total length: 227,020
The TO_CLOB conversion is pretty expensive, so try to avoid it. But I think it should perform reasonably well for 9K rows. The following test case is based on one of the applications we develop, which has similar data model behaviour:
create table bubs_projecten_sample
( id number
, toelichting varchar2(1700)
, toelichting_l clob
);
begin
for i in 1..10000
loop
insert into bubs_projecten_sample
( id
, toelichting
, toelichting_l
)
values
( i
, case when mod(i, 2) = 0 then 'short' else null end -- even rows get the short text
, case when mod(i, 2) = 1 then rpad('long', i, '*') else null end -- odd rows get the long text
)
;
end loop;
commit;
end;
/
Now make sure everything is in the cache and dirty blocks have been written out:
select *
from bubs_projecten_sample;
Test performance:
create table bubs_projecten_flat
as
select id
, to_clob(toelichting) toelichting_any
from bubs_projecten_sample
where toelichting is not null
union all
select id
, toelichting_l
from bubs_projecten_sample
where toelichting_l is not null;
The create table takes less than 1 second on a normal entry-level server, including writing out the data: 17K consistent gets, 4K physical reads. Stored on disk (note the rpad) is 25K for toelichting and 16M for toelichting_l.
Can you further elaborate on the problem?
Please check that large CLOBs are not stored inline. Normally large CLOBs are stored in a separate system-maintained table. Storing large CLOBs inside a table can make going through the table with a Full Table Scan expensive.
Also, I can imagine populating both columns always. You still get the benefit of indexing on the first so many characters; you just need to record in the table, with an indicator column, whether the CLOB or the ShortText column is leading, as sketched below.
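A minimal sketch of that idea (the long_is_leading column name is invented here):
alter table MyTable add (long_is_leading char(1) default 'N' not null);
update MyTable
set ShortText = dbms_lob.substr(LongText, 850, 1) -- first 850 characters of the CLOB
, long_is_leading = 'Y'
where LongText is not null;
Searches can then use the indexed ShortText column, and the application only reads LongText when long_is_leading = 'Y'.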
As a side note: I see a difference between 850 and 1700. I would recommend making them equal, but remember to check that you are creating the table using character semantics. That can be done at statement level by using "varchar2(850 char)". Please note that Oracle will then actually create a column that fits 850 * 4 bytes (in AL32UTF8 at least, where a character takes at most 4 bytes). Good luck!
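For example, a tiny sketch of character semantics:
create table semantics_demo
( short_text varchar2(850 char) -- holds 850 characters regardless of how many bytes each takes
);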

Why is Oracle losing data during commit?

I have a fairly standard SQL Query as follows:
TRUNCATE TABLE TABLE_NAME;
INSERT INTO TABLE_NAME
(
UPRN,
SAO_START_NUMBER,
SAO_START_SUFFIX,
SAO_END_NUMBER,
SAO_END_SUFFIX,
SAO_TEXT,
PAO_START_NUMBER,
PAO_START_SUFFIX,
PAO_END_NUMBER,
PAO_END_SUFFIX,
PAO_TEXT,
STREET_DESCRIPTOR,
TOWN_NAME,
POSTCODE,
XY_COORD,
EASTING,
NORTHING,
ADDRESS
)
SELECT
BASIC_LAND_AND_PROPERTY_UNIT.UPRN,
LAND_AND_PROPERTY_IDENTIFIER.SAO_START_NUMBER AS SAO_START_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.SAO_START_SUFFIX AS SAO_START_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.SAO_END_NUMBER AS SAO_END_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.SAO_END_SUFFIX AS SAO_END_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.SAO_TEXT AS SAO_TEXT,
LAND_AND_PROPERTY_IDENTIFIER.PAO_START_NUMBER AS PAO_START_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.PAO_START_SUFFIX AS PAO_START_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.PAO_END_NUMBER AS PAO_END_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.PAO_END_SUFFIX AS PAO_END_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.PAO_TEXT AS PAO_TEXT,
STREET_DESCRIPTOR.STREET_DESCRIPTOR AS STREET_DESCRIPTOR,
STREET_DESCRIPTOR.TOWN_NAME AS TOWN_NAME,
LAND_AND_PROPERTY_IDENTIFIER.POSTCODE AS POSTCODE,
BASIC_LAND_AND_PROPERTY_UNIT.GEOMETRY AS XY_COORD,
BASIC_LAND_AND_PROPERTY_UNIT.X_COORDINATE AS EASTING,
BASIC_LAND_AND_PROPERTY_UNIT.Y_COORDINATE AS NORTHING,
decode(SAO_START_NUMBER,null,null,SAO_START_NUMBER||SAO_START_SUFFIX||' ')
||decode(SAO_END_NUMBER,null,null,SAO_END_NUMBER||SAO_END_SUFFIX||' ')
||decode(SAO_TEXT,null,null,SAO_TEXT||' ')
||decode(PAO_START_NUMBER,null,null,PAO_START_NUMBER||PAO_START_SUFFIX||' ')
||decode(PAO_END_NUMBER,null,null,PAO_END_NUMBER||PAO_END_SUFFIX||' ')
||decode(PAO_TEXT,null,null,'STREET RECORD',null,PAO_TEXT||' ')
||decode(STREET_DESCRIPTOR,null,null,STREET_DESCRIPTOR||' ')
||decode(POST_TOWN,null,null,POST_TOWN||' ')
||Decode(Postcode,Null,Null,Postcode) As Address
From (Land_And_Property_Identifier
Inner Join Basic_Land_And_Property_Unit
On Land_And_Property_Identifier.Uprn = Basic_Land_And_Property_Unit.Uprn)
Inner Join Street_Descriptor
On Land_And_Property_Identifier.Usrn = Street_Descriptor.Usrn
Where Land_And_Property_Identifier.Postally_Addressable='Y';
If I run this query in SQL Developer, it runs fine, with 1.8 million features inserted (select count(*) from TABLE_NAME within the same session confirms this).
But when I run the commit, the data disappears! select count(*) from TABLE_NAME now returns 0 results.
We've done a number of things to try and see what's going on:
During the truncate, tablespace is freed up, and during the insert it's filled again. There is no change during the commit. This implies the data is in the database.
If I do the exact same query but with "and rownum < 100" appended to the end, the commit works. Same with 1000.
I found this question: oracle commit kills, and had our DBA try a SQL trace. This produced a >4 GB file which, when parsed with TKPROF, produced a 120-page report, but we don't know how to read it and there's nothing obviously wrong in there.
Our error logs have nothing in them. And obviously no error during the commit itself.
There's a trigger/sequence which increments by 1.8 million during the process.
I've repeated this about 4 times now, but the result is always the same.
So my question is simple - what's happening to the data during the commit? How can we find out? Thanks.
Note: this has run fine in the past, so I don't believe there's anything wrong with the SQL per se.
Edit: Issue resolved by recreating the table from scratch. Now the insert only takes 500 seconds compared to the previous 2000, and committing is instantaneous; when it was broken, the commit took 4000 seconds!
I still have no idea why it happened though.
For those asking, the Create Table syntax:
CREATE TABLE TABLE_NAME
(
ADDRESS VARCHAR2(4000),
UPRN NUMBER(12),
SAO_START_NUMBER NUMBER(4),
SAO_START_SUFFIX VARCHAR2(1),
SAO_END_NUMBER NUMBER(4),
SAO_END_SUFFIX VARCHAR2(1),
SAO_TEXT VARCHAR2(90),
PAO_START_NUMBER NUMBER(4),
PAO_START_SUFFIX VARCHAR2(1),
PAO_END_NUMBER NUMBER(4),
PAO_END_SUFFIX VARCHAR2(1),
PAO_TEXT VARCHAR2(90),
STREET_DESCRIPTOR VARCHAR2(100),
TOWN_NAME VARCHAR2(30),
POSTCODE VARCHAR2(8),
XY_COORD MDSYS.SDO_GEOMETRY,
EASTING NUMBER(7),
NORTHING NUMBER(7)
);
CREATE INDEX TABLE_NAME_ADD_IDX ON TABLE_NAME (ADDRESS);
Do you still lose the data if you wrap the transaction in an anonymous block?
My guess is that you are opening two SQL windows in SQL Developer, which means two separate sessions: running SQL code in window 1 and issuing commit; in window 2 will not commit the changes made in window 1.
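A quick way to check (a sketch; run it in both worksheets and compare the values):
select sys_context('userenv', 'sid') as session_id from dual;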
Truncate table does an implicit commit. So the table will be empty until insert + commit finishes.
begin
execute immediate 'truncate table table_name reuse storage'; --use "reuse" if you know the data will be of similar size
-- implicit commit has occured and the table is empty for all sessions
insert into table_name (lots)
select lots from table2;
commit;
end;
You should use truncate with reuse storage so that the database doesn't go and free all the blocks just to acquire the same number of blocks again during the insert.
If you want/need to have the data available at all times a better (but longer) method is
begin
savepoint letsgo;
delete from table_name;
insert into table_name (lots)
select lots from table2;
commit;
exception
when others then
rollback to letsgo;
raise; -- re-raise so the failure isn't silently swallowed
end;
Perhaps you had a trigger which you hadn't noticed. Can you check Oracle's recycle bin, which might be storing the history of your dropped table and trigger?
Select * from recyclebin;
Reference: http://www.oraclebin.com/2012/12/recyclebinflashback.html

Dropping multiple columns: PLSQL and user_tab_cols

I have a table TABLE_X with multiple columns whose names begin with M_, all of which need to be dropped. I decided to use the following PL/SQL code to drop almost 100 such columns. Is this a good use of dynamic SQL and cursors? Can it be done better? I didn't know of a simpler way, since ALTER TABLE ... DROP COLUMN doesn't allow a subquery to specify multiple column names.
declare
rcur sys_refcursor;
cn user_tab_cols.column_name%type;
begin
open rcur for select column_name from user_tab_cols where table_name = 'TABLE_X' and column_name like 'M\_%' escape '\'; -- escape the underscore: unescaped, _ is a single-character wildcard in LIKE
loop
fetch rcur into cn;
exit when rcur%NOTFOUND;
execute immediate 'alter table TABLE_X drop column '||cn; -- works
execute immediate 'alter table TABLE_X drop column :col' using cn; -- error: identifiers cannot be bound
end loop;
close rcur;
end;
Also, why is it impossible to use "using cn"?
This is a reasonable use of dynamic SQL. I would seriously question an underlying data model that has hundreds of columns in a single table that start with the same prefix and all need to be dropped. That implies to me that the data model itself is likely to be highly problematic.
Even using dynamic SQL, you cannot use bind variables for column names, table names, schema names, etc. Oracle needs to know at parse time what objects and columns are involved in a SQL statement. Since bind variables are supplied after the parse phase, however, you cannot specify a bind variable that changes what objects and/or columns a SQL statement is affecting.
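Because the identifier has to be concatenated into the statement, it is worth validating it first. A sketch using DBMS_ASSERT (available since 10gR2), inside the loop above:
execute immediate 'alter table TABLE_X drop column '
|| sys.dbms_assert.simple_sql_name(cn);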
The syntax for dropping multiple columns in a single alter statement is this:
SQL> desc t42
Name Null? Type
----------------------------------------- -------- ----------------------
COL1 NUMBER
COL2 DATE
COL3 VARCHAR2(30)
COL4 NUMBER
SQL> alter table t42 drop (col2, col3)
2 /
Table altered.
SQL> desc t42
Name Null? Type
----------------------------------------- -------- ----------------------
COL1 NUMBER
COL4 NUMBER
SQL>
So, if you really need to optimize the operation, you'll need to build up the statement incrementally - or use a string aggregation technique.
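For example, a sketch using LISTAGG (11gR2 and later; assumes the aggregated column list stays within VARCHAR2 limits) to issue a single ALTER:
declare
l_cols varchar2(32767);
begin
select listagg(column_name, ', ') within group (order by column_id)
into l_cols
from user_tab_cols
where table_name = 'TABLE_X'
and column_name like 'M\_%' escape '\';
if l_cols is not null then
-- drop all matching columns in one statement
execute immediate 'alter table TABLE_X drop (' || l_cols || ')';
end if;
end;
/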
However, I would question whether you ought to be running a statement like this often enough to need to optimize it.

What is the maximum length of a table name in Oracle?

What are the maximum lengths of a table name and a column name in Oracle?
In Oracle 12.2 and above the maximum object name length is 128 bytes.
In Oracle 12.1 and below the maximum object name length is 30 bytes.
Teach a man to fish
Notice the data type and size:
>describe all_tab_columns
VIEW all_tab_columns
Name Null? Type
----------------------------------------- -------- ----------------------------
OWNER NOT NULL VARCHAR2(30)
TABLE_NAME NOT NULL VARCHAR2(30)
COLUMN_NAME NOT NULL VARCHAR2(30)
DATA_TYPE VARCHAR2(106)
DATA_TYPE_MOD VARCHAR2(3)
DATA_TYPE_OWNER VARCHAR2(30)
DATA_LENGTH NOT NULL NUMBER
DATA_PRECISION NUMBER
DATA_SCALE NUMBER
NULLABLE VARCHAR2(1)
COLUMN_ID NUMBER
DEFAULT_LENGTH NUMBER
DATA_DEFAULT LONG
NUM_DISTINCT NUMBER
LOW_VALUE RAW(32)
HIGH_VALUE RAW(32)
DENSITY NUMBER
NUM_NULLS NUMBER
NUM_BUCKETS NUMBER
LAST_ANALYZED DATE
SAMPLE_SIZE NUMBER
CHARACTER_SET_NAME VARCHAR2(44)
CHAR_COL_DECL_LENGTH NUMBER
GLOBAL_STATS VARCHAR2(3)
USER_STATS VARCHAR2(3)
AVG_COL_LEN NUMBER
CHAR_LENGTH NUMBER
CHAR_USED VARCHAR2(1)
V80_FMT_IMAGE VARCHAR2(3)
DATA_UPGRADED VARCHAR2(3)
HISTOGRAM VARCHAR2(15)
DESCRIBE all_tab_columns shows TABLE_NAME VARCHAR2(30).
Note that VARCHAR2(30) means a 30-byte limitation, not a 30-character limitation, and the two may therefore differ if your database is configured to use a multibyte character set.
Right, but as long as you use ASCII characters, even a multibyte character set would still give a limit of exactly 30 characters... so unless you want to put hearts and smiling cats in your DB names, you're fine...
Updated: as stated above, in Oracle 12.2 and later, the maximum object name length is now 128 bytes.
The rest of this post applies to Oracle 12.1 and below, where the limit was 30 characters (bytes, really).
But do not take my word for it; try this for yourself (on Oracle 12.1 or below):
SQL> create table I23456789012345678901234567890 (my_id number);
Table created.
SQL> create table I234567890123456789012345678901(my_id number);
ERROR at line 1:
ORA-00972: identifier is too long
The schema object naming rules may also be of some use:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/sql_elements008.htm#sthref723
On Oracle 12.2 you can use the built-in constant ORA_MAX_NAME_LEN, which is set to 128 bytes (as of 12.2). In Oracle 12.1 and below the maximum size was 30 bytes.
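A minimal sketch (12.2+ only; assumes the constant, which lives in package STANDARD, is visible to your PL/SQL):
set serveroutput on
begin
dbms_output.put_line('Max identifier length: ' || ora_max_name_len);
end;
/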
In the 10g database I'm dealing with, I know table names are capped at 30 characters; column names carry the same 30-character limit.
The maximum name size is 30 characters because the data dictionary allows only 30 bytes for storing the name.
The maximum length of Oracle database object names is 30 bytes.
Object Name Rules:
http://docs.oracle.com/database/121/SQLRF/sql_elements008.htm
I'm working on Oracle 12c (12.1); it doesn't seem to allow more than 30 characters for column/table names.
I read through an Oracle page which mentions 30 bytes:
https://docs.oracle.com/database/121/SQLRF/sql_elements008.htm#SQLRF00223
In 12.1, although all_tab_columns does say VARCHAR2(128) for TABLE_NAME, it does not allow a name of more than 30 bytes.
I found another article about 12c R2, which seems to allow up to 128 characters:
https://community.oracle.com/ideas/3338
For Sybase database users: the maximum length of a table or column name is 128 bytes (or 128 characters).
