Oracle 19c: set password of 14 characters and alphanumeric

I have created a CDB and a PDB inside it, and I want to create a user in the PDB with a 14-character alphanumeric password.
I get an error when using long alphanumeric passwords.
The command I am using to create a new PDB is as follows.
CREATE PLUGGABLE DATABASE pdb_name ADMIN USER pdb_database_admin_user IDENTIFIED BY pdb_database_admin_user_password
FILE_NAME_CONVERT = (pdbseed_location, new_pdb_location);
The command I used to create the CDB is as follows:
dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbname CDB1 -sid CDB1 -responseFile NO_VALUE \
-characterSet AL32UTF8 \
-sysPassword SysPassword1 \
-systemPassword SysPassword1 \
-createAsContainerDatabase true \
-databaseType MULTIPURPOSE \
-memoryMgmtType auto_sga \
-totalMemory 2000 \
-storageType FS \
-datafileDestination "${DATA_DIR}" \
-redoLogFileSize 50 \
-emConfiguration NONE \
-ignorePreReqs
Do I have to edit a profile?

You say "set password of 14 characters".
But in the sample code you posted, the password is
pdb_database_admin_user_password
Not sure where the disconnect is; do you not understand that the string in the IDENTIFIED BY clause is the password you are creating, or do you have a problem with counting?
If it's the latter, I have a simple solution: don't count. Use the length() function instead (you can run it on any free online SQL engine; there are many):
select length('pdb_database_admin_user_password') as password_length
from dual;
PASSWORD_LENGTH
---------------
32
(Of course, if you can count carefully, you don't need a query like this; the count is indeed 32 characters.)
Now, the problem is that passwords are limited to 30 bytes, which means at most 30 characters (fewer if they contain multi-byte characters).
https://docs.oracle.com/en/database/oracle/oracle-database/19/dbseg/configuring-authentication.html#GUID-AA1AA635-1CD5-422E-B8CA-681ED7C253CA
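So a 14-character alphanumeric password fits comfortably within that limit and needs no profile change, as long as the default profile has no password verify function imposing extra rules. A minimal sketch of a working call; the PDB name, admin user, file paths, and the example password below are placeholders, not values from your environment:
# Connect to the CDB root as SYSDBA (assumes a local bequeath connection).
sqlplus / as sysdba <<'SQL'
-- 14 characters, letters and digits only: well under the 30-byte limit.
CREATE PLUGGABLE DATABASE pdb1
  ADMIN USER pdb_admin IDENTIFIED BY Abc123Def456Gh
  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/CDB1/pdbseed/',
                       '/u01/app/oracle/oradata/CDB1/pdb1/');
SQL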
Does that answer your question?

Related

Add a timestamp to the name of a VM instance created in a bash script

I deploy some VM instances in my cloud infrastructure with a bash script:
#!/bin/bash
instance_name="vm"
# create instance
yc compute instance create \
--name $instance_name \
--hostname reddit-app \
--memory=2 \
...
I need to add a timestamp to the instance name in the format vm-DD-MM_YYYY-H-M-S.
For debugging, I tried setting instance_name=$(date +%d-%m-%Y_%H-%M-%S), but got this error:
ERROR: rpc error: code = InvalidArgument desc = Request validation error: Name: invalid resource name
Any help would be appreciated.
The Yandex Cloud documentation says:
"The name may contain lowercase Latin letters, numbers, and hyphens. The first character must be a letter. The last character can't be a hyphen. The maximum length of the name is 63 characters".
I changed my script to follow these rules (my debug attempt failed because the generated name started with a digit and contained an underscore), and it works now:
#!/bin/bash
instance_name="vm-$(date +%d-%m-%Y-%H-%M-%S)"
# create instance
yc compute instance create \
--name $instance_name \
--hostname reddit-app \
--memory=2 \
...
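If you want to guard against the name drifting out of spec later (say, a different prefix or date format), you could validate the generated name against the documented rules before calling yc. A minimal sketch; the check simply mirrors the rules quoted above:
#!/bin/bash
instance_name="vm-$(date +%d-%m-%Y-%H-%M-%S)"
# Lowercase Latin letters, digits and hyphens; first character a letter,
# last character not a hyphen, at most 63 characters in total.
if [[ ! $instance_name =~ ^[a-z][a-z0-9-]{0,62}$ ]] || [[ $instance_name == *- ]]; then
  echo "invalid instance name: $instance_name" >&2
  exit 1
fi
# ...then run the yc compute instance create command as before, e.g.
# yc compute instance create --name "$instance_name" --hostname reddit-app ...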

Pull special character using Sqoop

I am stuck at one point with Sqoop.
My source has one column that contains a special character, but when I pull the data with Sqoop, the special character is changed to something else.
In my source Oracle table I have:
jan 2005 �DSX�
but when Sqoop loads the data into the Hive table, the special character is changed to something else:
jan 2005 �DSXÙ
Please suggest a solution so that I get exactly the same special character as in the source (Oracle) table.
sqoop import \
--connect "jdbc:oracle:thin:#source connection details" \
--connection-manager org.apache.sqoop.manager.OracleManager \
--username abc \
--password xyz \
--fields-terminated-by '\001' \
--null-string '' \
--null-non-string '' \
--query "select column_name from wxy.ztable where \$CONDITIONS " \
--target-dir "db/dump/dir" \
--split-by "col1" \
-m 1
If you are seeing jan 2005 �DSX� in your Oracle table, the encoding for the Oracle table is probably not set correctly either. I don't have much experience with Oracle, so I can't tell you how to check, but you can ask your Oracle DBA.
One thing I can tell you is that Hadoop uses UTF-8 encoding, so you first need to convert your Oracle data to UTF-8 and then import it.
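If you want to check the Oracle side, a starting point is to look at the character set the database declares; if the value already looks garbled when you query it directly, it was stored in a character set that does not match that declaration, and Sqoop (which reads through JDBC and writes UTF-8 to HDFS) cannot repair it. A sketch, with a placeholder connection string:
# Check what character set the source database declares.
sqlplus -s abc/xyz@source_db <<'SQL'
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
SQL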

Error in TiDB: `java.sql.BatchUpdateException: statement count 5001 exceeds the transaction limitation`

When I was using Sqoop to write data into TiDB in batches, I ran into the following error:
java.sql.BatchUpdateException: statement count 5001 exceeds the transaction limitation
I have already configured the --batch option, but this error still occurred. How can I resolve it?
In Sqoop, --batch means committing 100 statements in each batch, but by default each statement contains 100 SQL statements. So, 100 * 100 = 10000 SQL statements, which exceeds 5000, the maximum number of statements allowed in a single TiDB transaction.
Two solutions:
Add the -Dsqoop.export.records.per.statement=10 option as follows:
sqoop export \
-Dsqoop.export.records.per.statement=10 \
--connect jdbc:mysql://mysql.example.com/sqoop \
--username ${user} \
--password ${passwd} \
--table ${tab_name} \
--export-dir ${dir} \
--batch
You can also increase the limit on the number of statements allowed in a single TiDB transaction, but this will consume more memory.
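Another way to express the same fix is to set both Sqoop export properties explicitly so that their product stays at or below the 5000-statement limit. A sketch; the connection URL and the ${...} values are placeholders:
sqoop export \
-Dsqoop.export.records.per.statement=10 \
-Dsqoop.export.statements.per.transaction=100 \
--connect jdbc:mysql://tidb.example.com:4000/sqoop \
--username ${user} \
--password ${passwd} \
--table ${tab_name} \
--export-dir ${dir} \
--batch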

Sqoop export command for HDFS data that has spaces around the values

I have data stored in HDFS, and the values have spaces before and after them. When I try to export to MySQL, it gives a NumberFormatException, but when I create the data without spaces, it is inserted into MySQL successfully.
My question is: can't we export data that has spaces from HDFS to MySQL using the sqoop export command?
The data I used:
1201, adi, sen manager, 30000, it
1201, pavan, jun manager, 5000, cs
1203, santhosh, junior, 60000, mech
I created the table like this:
create table emp(id BIGINT,name varchar(20),desg varchar(20),salary BIGINT,dept varchar(20));
The sqoop command:
sqoop export \
--connect jdbc:mysql://127.0.0.1/mydb \
--username root \
--table emp \
--m 1 \
--export-dir /mydir \
--input-fields-terminated-by ',' \
--input-lines-terminated-by '\n'
Result: NumberFormatException, input string: '1201'
It can't parse the data.
I asked in a forum and they said to trim the spaces, but I want to know whether Sqoop can automatically trim the spaces while performing the export.
Can somebody give suggestions on this?
You can do one simple thing:
Create a temporary table in MySQL with all VARCHAR columns (so the export accepts the padded values):
create table emp_temp(id varchar(20),name varchar(20),desg varchar(20),salary varchar(20),dept varchar(20));
Now create another table with the numeric fields converted using TRIM() and CAST():
create table emp as select CAST(TRIM(id) AS UNSIGNED) AS id, name, desg, CAST(TRIM(salary) AS UNSIGNED) AS salary, dept FROM emp_temp;
Sqoop internally runs a MapReduce job, so another simple solution is to run a mapper that trims the spaces in your data, write the output to a different location, and then run sqoop export on the new files (a lightweight variant of this is sketched below).
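If the dataset is small enough to stream through a single process, you can clean the delimited files once and export the cleaned copy instead. A sketch with placeholder paths, assuming the field values themselves contain no embedded commas:
# Trim the whitespace around each comma-delimited field and stage the
# cleaned copy in a new HDFS directory.
hdfs dfs -mkdir -p /mydir_trimmed
hdfs dfs -cat '/mydir/*' \
  | sed 's/ *, */,/g; s/^ *//; s/ *$//' \
  | hdfs dfs -put - /mydir_trimmed/part-00000
# Then run the same sqoop export command with --export-dir /mydir_trimmed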

Can I set the TTL for documents loaded into Couchbase from HDFS using Sqoop?

I am attempting to load JSON documents from Hadoop HDFS into Couchbase using Sqoop. I am able to load the documents correctly, but the TTL of the documents is 0. I would like the documents to expire over a period of time rather than live forever. Is that possible with the Couchbase connector for Sqoop?
As I said, the documents are loaded correctly, just without a TTL.
The document looks like this:
key1#{"key": "key1", "message": "A message here"}
key2#{"key": "key2", "message": "Another message"}
The sqoop call looks like this:
sqoop export -D mapred.map.child.java.opts="-Xmx4096m" \
-D mapred.job.map.memory.mb=6000 \
--username ${COUCHBASE_BUCKET} \
--password-file ${COUCHBASE_PASSWORD_FILE} \
--table ignored \
--connect ${COUCHBASE_URL} \
--export-dir ${INPUT_DIR} \
--verbose \
--input-fields-terminated-by '#' \
--lines-terminated-by '\n' \
-m 2
Thank you for your help.
I do not think there is a straightforward UI or setting to do it. The code would have to be modified within the connector.
There is no TTL option in the current sqoop plugin version. However, if you just want to set the same TTL for all the imported objects, you can quite easily add the code yourself. Take a look at line 212 here: https://github.com/couchbase/couchbase-hadoop-plugin/blob/master/src/java/com/couchbase/sqoop/mapreduce/db/CouchbaseOutputFormat.java#L212
You just need to add a TTL parameter to the set calls. If you want to be thorough about it, you can take the TTL value from the command line and put it in the DB configuration object, so you can use it in code.