Checking availability of DB2 database via db2cli - shell

I am trying to check the availability of a DB2 instance via the db2cli utility, as follows:
db2cli execsql -user USER -passwd PASSWD -connstring DATABASE:HOST:PORT
(with actual values for the uppercased text). I would expect this to connect to HOST:PORT, using the credentials USER and PASSWD, and to switch to database DATABASE.
As a result I get:
SQLError: rc = 0 (SQL_SUCCESS)
SQLGetDiagRec: SQLState : 08001
fNativeError : -1024
szErrorMsg : [IBM][CLI Driver] SQL1024N A database connection does not exist. SQLSTATE=08003
cbErrorMsg : 82
But: these values WORK on the same machine if I use them as credentials in applications that connect to DB2, so I would expect the given command to establish a connection.
My question is: am I using db2cli wrong?

You are using the wrong connection string as well as the wrong options. Check the correct command syntax by running the "db2cli execsql -help" command.
You can use the -user and -passwd options with the -dsn option only. If you are using a connection string, then UID and PWD should be part of the -connstring option value. Also, the syntax of your connection string is wrong: it must be keyword=value pairs separated by semicolons and enclosed in quotes, like "key1=val1;key2=val2;key3=val3". The correct command to use is:
db2cli execsql -connstring "DATABASE=dbname;HOSTNAME=hostname;PORT=portnumber;UID=userid;PWD=passwd"
The output for me is as below:
$ db2cli execsql -connstring "database=bluemix;hostname=192.168.1.20;port=50000;uid=myuid;pwd=mydbpassword"
IBM DATABASE 2 Interactive CLI Sample Program
(C) COPYRIGHT International Business Machines Corp. 1993,1996
All Rights Reserved
Licensed Materials - Property of IBM
US Government Users Restricted Rights - Use, duplication or
disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
> select 'bluemix' from sysibm.sysdummy1
select 'bluemix' from sysibm.sysdummy1
FetchAll: Columns: 1
1
bluemix
FetchAll: 1 rows fetched.
> quit
$
To find the instance name, run the db2level command.
$ db2level
DB21085I This instance or install (instance name, where applicable: "bimaljha") uses
"64" bits and DB2 code release "SQL10054" with level identifier "0605010E".
Informational tokens are "DB2 v10.5.0.4", "s140813", "IP23623", and Fix Pack "4".
Product is installed at "/home/bimaljha/sqllib".

You can also try validating the connection as shown below (it will tell you whether the connection succeeds):
db2cli validate -dsn sample -connect
db2cli.ini:
[sample]
hostname=host
pwd=password
port=portnumber
PROTOCOL=TCPIP
database=dbname
uid=username
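
To script an availability check around this, here is a minimal sh sketch (an assumption, not from the answers above: it relies on db2cli printing an SQLSTATE line on failure, as in the question's output, rather than on its exit code):
#!/bin/sh
# Availability probe built on "db2cli validate" from the answer above.
# Assumption (hedged): on connection failure db2cli prints a line
# containing an SQLSTATE, so we grep the output for that marker.
OUTPUT=$(db2cli validate -dsn sample -connect 2>&1)
if printf '%s\n' "$OUTPUT" | grep -qi "sqlstate"; then
    echo "DB2 connection FAILED"
    printf '%s\n' "$OUTPUT"
    exit 1
fi
echo "DB2 connection OK"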

Related

Perforce - how to prevent "p4 client" from creating a client when the template form is not saved?

The Perforce documentation for p4 client <no args> states:
The p4 client command puts the client spec into a temporary file and
invokes the editor configured by the environment variable P4EDITOR.
For new workspaces, the client name defaults to the P4CLIENT
environment variable, if set, or to the current host name. Saving the
file creates or modifies the client spec.
What I am seeing on our network is that the client is created no matter what, even when I exit without saving.
Ex.
[cad_test_user@sws-cab9-0 ~]$ pwd
/home/cad_test_user
[cad_test_user@sws-cab9-0 ~]$ env | grep P4
P4EDITOR=
P4PORT=tcp:p4p:1666
P4DIFF=tkdiff
P4CONFIG=.p4config
P4IGNORE=.ignore
P4USER=cad_test_user
[cad_test_user@sws-cab9-0 ~]$ p4 clients | grep sws-cab9-0
[cad_test_user@sws-cab9-0 ~]$ p4 client
Client: sws-cab9-0
Owner: cad_test_user
Host: sws-cab9-0.aus5.mythic-ai.com
Root: /home/cad_test_user
Options: noallwrite noclobber nocompress unlocked nomodtime normdir
SubmitOptions: submitunchanged
LineEnd: local
View:
<quit without save>
Client sws-cab9-0 saved.
[cad_test_user@sws-cab9-0 ~]$ p4 clients | grep sws-cab9-0
Client sws-cab9-0 2021/04/06 root /home/cad_test_user 'Created by cad_test_user. '
Now, as another user outside of a .p4config hierarchy, I get an unexpected value for %clientRoot%:
[cad_test_user@sws-cab9-0 /]$ p4 -F %clientRoot% -ztag info
/home/cad_test_user
I am wondering if there is something wrong with our default settings; why is the client created and saved even without a write? Ideally, I'd want to manage the default specification to some degree, like:
synthesize the client name so that it is never the hostname, like c:$USER:foo
not have a "Host:" field
define the "Root:" to be somewhere personal
not create the client unless the user does a write-quit!
Thanks for your answers!
Set up a trigger (a form-save trigger on the client form) that rejects a client which doesn't meet your criteria. It's hard to enforce #4 directly, but as long as at least one of your other criteria is something that requires the form to be edited, it's handled well enough indirectly.
Note that you can pair your form-save trigger with a form-out trigger that modifies the default client form -- you could for example replace Root with an obviously invalid field like --ENTER SOMETHING PERSONALIZED HERE-- and then make sure your form-save trigger rejects it. The Perforce sys admin guide has some nice simple example triggers, one of which demonstrates customizing client spec defaults: https://www.perforce.com/manuals/p4sag/Content/P4SAG/scripting.triggers.forms.out.html
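For example, a minimal form-out trigger along those lines might look like the sketch below (the script name and trigger-table entry are illustrative, not from the thread):
#!/bin/bash
# Illustrative form-out trigger. Assumed entry in the triggers table:
#   client-defaults form-out client "/p4/scripts/client_form_out.sh %formfile%"
# The server passes the default client form in %formfile%; whatever this
# script leaves in that file is what the user's editor opens.
FORMFILE="$1"
# Replace the default Root with an obviously invalid placeholder so the
# companion form-save trigger can reject forms saved without editing.
sed -i 's|^Root:.*|Root:\t--ENTER SOMETHING PERSONALIZED HERE--|' "$FORMFILE"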
On your criteria #2, I would recommend against this unless you're in an environment where it's commonplace for multiple host machines to share a single filesystem. The default Host guardrails are there to keep you from confusing yourself (and possibly losing data) by reusing a client spec in ways that throw the workspace state out of whack.

cPanel - MariaDB - Update field in multiples databases

We have a cPanel account with several databases.
Some of these databases have a common field, of type text, that stores a snippet.
One of these databases is the MASTER. What we would like to do is update this field in one table of our MASTER database and then write the same value to the rest of the databases. In all databases, the names of the table and the field are the same.
We have tried doing this by connecting to the DB from a shell script to fetch the new value and then updating the rest of the databases. But when we save the value of the field, it doesn't save the right value.
As an example:
FILE query.sql
SELECT snippet FROM wp_hfcm_scripts where snippet like '%myvalue%';
If we connect to the DB and run the query interactively, everything is OK:
mysql -D mybbdd
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 941167
Server version: 10.3.25-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [mybbdd]> SELECT snippet FROM wp_hfcm_scripts where snippet like '%myvalue%';
+--------------------------------------------------------------------------------------+
| snippet                                                                               |
+--------------------------------------------------------------------------------------+
| <script> //myvalue 20201007
| jQuery(document).ready(function($){ . . .
But if we launch the query directly from the command line, it responds in a strange format:
mysql -D mybbdd < query.sql
snippet
\n\t\tjQuery("#hero-responsive > p, #hero-responsive > br, #info-responsive > p, #info-responsive > br, #checkout-responsive > p, #checkout-responsive > br, #right-checkout-responsive > p, #er\n \t\t\tjQuery(element).append("<div class='video-checkout'><div
class=''><div
.
.
.
Does anyone know why this may be happening?
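A likely explanation, not confirmed in the thread: when stdout is not a terminal, the mysql/mariadb client switches to tab-separated batch output and escape-encodes newlines and tabs, which is where the literal \n and \t come from. The client's standard -t (table format) and --raw (no escape conversion) flags should restore the interactive look:
mysql -D mybbdd -t < query.sql      # table format, as in the monitor
mysql -D mybbdd --raw < query.sql   # tab-separated, but without \n escaping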

Consuming function module with SAP Netweaver RFC SDK in Bash

I'm trying to make a request to a function in a SAP RFC server hosted at 10.123.231.123 with user myuser, password mypass, sysnr 00, client 076, language E. The name of the function is My_Function_Nm with params: string Alternative, string Date, string Name.
I use the command line:
/usr/sap/nwrfcsdk/bin/startrfc -h 10.123.231.123 -s 00 -u myuser -p mypass -c 076 -l en -F My_Function_Nm
But it always shows me the help instructions.
I guess I'm not specifying -E pathname=edifile, and that's because I don't know how to create an EDI file containing the parameter values for the specified function. Maybe someone can help me with how to create this file and how to correctly invoke startrfc to call this function?
Thanks in advance.
If you actually check the help text that the program shows, you should find the following passages:
RFC connection options:
[...]
-2 SNA mode on.
You must set this if you want to connect to R/2.
[...]
-3 R/3 mode on.
You must set this if you want to connect to R/3.
Apparently you forgot to specify -3...
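Putting it together, the call from the question with the missing flag added would be (all other values taken from the question):
/usr/sap/nwrfcsdk/bin/startrfc -3 -h 10.123.231.123 -s 00 -u myuser -p mypass -c 076 -l en -F My_Function_Nm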
You should use sapnwrfc.ini, which stores your connection parameters; it should be placed in the same directory as the client program.
A sample file for your app would look like the following:
DEST=TST1
ASHOST=10.123.231.123
USER=myuser
PASSWD=mypass
SYSNR=00
CLIENT=076
RFC_TRACE=0
Documentation on using this file is here.
To call the function you must create a Bash script, but it is better to use a Python script.

How to configure xemacs to recognize database specified in .sql-mode?

I am running xemacs with a .sql-mode file containing the following:
(setq sql-association-alist
      '(
        ("XDBST (mis4) " ("XDBST" "xsius" "password"))
        ("dev " ("DEVTVAL1" "xsi" "password" "devbilling"))
        ))
When I log in to the database in xemacs by selecting Utilities->Interactive Mode->Use Association, it logs me in but it does not pick up the database parameter. For example, when I log in to "dev", it logs me in but then when I do "select db_name()" it yields csdb instead of devbilling. It appears that it is picking up the default database associated with the user and ignoring the database parameter. How do you configure xemacs so that it picks up the database parameter specified in .sql-mode when the option is selected?
Thanks,
Mike
I did some more research: xemacs uses sql-mode.el (on my system, /usr/local/xemacs/lisp/sql-mode.el) to log in with SQL Mode. The code in that file does not use the database specified in .sql-mode in Interactive Mode. It does, however, use it in Batch Mode. You can use Batch Mode as a workaround.

How to determine the Schemas inside an Oracle Data Pump Export file

I have an Oracle database backup file (.dmp) that was created with expdp.
The .dmp file was an export of an entire database.
I need to restore 1 of the schemas from within this dump file.
I don't know the names of the schemas inside this dump file.
To use impdp to import the data I need the name of the schema to load.
So I need to inspect the .dmp file and list all of the schemas in it. How do I do that?
Update (2008-09-18 13:02) - More detailed information:
The impdp command I'm currently using is:
impdp user/password@database directory=DPUMP_DIR dumpfile=EXPORT.DMP logfile=IMPORT.LOG
And the DPUMP_DIR is correctly configured.
SQL> SELECT directory_path
2 FROM dba_directories
3 WHERE directory_name = 'DPUMP_DIR';
DIRECTORY_PATH
-------------------------
D:\directory_path\dpump_dir\
And yes, the EXPORT.DMP file is in fact in that folder.
The error message I get when I run the impdp command is:
Connected to: Oracle Database 10g Enterprise Edition ...
ORA-31655: no data or metadata objects selected for the job
ORA-39154: Objects from foreign schemas have been removed from import
This error message is mostly expected. I need the impdp command be:
impdp user/password@database directory=DPUMP_DIR dumpfile=EXPORT.DMP
SCHEMAS=SOURCE_SCHEMA REMAP_SCHEMA=SOURCE_SCHEMA:MY_SCHEMA
But to do that, I need the source schema.
impdp writes the DDL of a dmp backup to a file if you use the SQLFILE parameter. For example, run:
impdp '/ as sysdba' dumpfile=<your .dmp file> logfile=import_log.txt sqlfile=ddl_dump.txt
Then check ddl_dump.txt for the tablespaces, users, and schemas in the backup.
According to the documentation, this does not actually modify the database:
The SQL is not actually executed, and the target system remains unchanged.
If you open the DMP file with an editor that can handle big files, you might be able to locate the areas where the schema names are mentioned. Just be sure not to change anything. It would be better if you opened a copy of the original dump.
Update (2008-09-19 10:05) - Solution:
My solution: social engineering. I dug real hard and found someone who knew the schema name.
Technical Solution: Searching the .dmp file did yield the schema name.
Once I knew the schema name, I searched the dump file and learned where to find it.
Places the schema name was seen in the .dmp file:
<OWNER_NAME>SOURCE_SCHEMA</OWNER_NAME>
This was seen before each table name/definition.
SCHEMA_LIST 'SOURCE_SCHEMA'
This was seen near the end of the .dmp.
Interestingly enough, around the SCHEMA_LIST 'SOURCE_SCHEMA' section, it also had the command line used to create the dump, directories used, par files used, windows version it was run on, and export session settings (language, date formats).
So, problem solved :)
Assuming that you do not have the log file from the expdp job that generated the file in the first place, the easiest option would probably be to use the SQLFILE parameter to have impdp generate a file of DDL (based on a full import). Then you can grab the schema names from that file. Not ideal, of course, since impdp has to read the entire dump file to extract the DDL and then again to get to the schema you're interested in, and you have to do a bit of text file searching for the various CREATE USER statements, but it should be doable.
To run the impdp command to produce a SQLFILE, you will need to run it as a user which has the DATAPUMP_IMP_FULL_DATABASE role.
Or... run it as a low-privileged user and use the MASTER_ONLY=YES option, then inspect the master table, e.g.:
select value_t
from SYS_IMPORT_TABLE_01
where name = 'CLIENT_COMMAND'
and process_order = -59;
col object_name for a30
col processing_status head STATUS for a6
col processing_state head STATE for a5
select distinct
object_schema,
object_name,
object_type,
object_tablespace,
process_order,
duplicate,
processing_status,
processing_state
from sys_import_table_01
where process_order > 0
and object_name is not null
order by object_schema, object_name
/
http://download.oracle.com/otndocs/products/database/enterprise_edition/utilities/pdf/oow2011_dp_mastering.pdf
Step 1: Here is one simple example. You have to create a SQL file from the dump file using the SQLFILE option.
Step 2: Grep for CREATE USER in the generated SQL file (here tables.sql)
Example here:
$ impdp directory=exp_dir dumpfile=exp_user1_all_tab.dmp logfile=imp_exp_user1_tab sqlfile=tables.sql
Import: Release 11.2.0.3.0 - Production on Fri Apr 26 08:29:06 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at 08:29:12
$ grep "CREATE USER" tables.sql
CREATE USER "USER1" IDENTIFIED BY VALUES 'S:270D559F9B97C05EA50F78507CD6EAC6AD63969E5E;BBE7786A5F9103'
Lots of Data Pump options are explained here: http://www.acehints.com/p/site-map.html
You need to search for OWNER_NAME.
cat -v dumpfile.dmp | grep -o '<OWNER_NAME>.*</OWNER_NAME>' | uniq -u
cat -v turns the dump file into visible text.
grep -o shows only the match, so we don't see really long lines.
uniq -u removes duplicated lines, so you see less output.
This works pretty well, even on large dump files, and could be tweaked for usage in a script.
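Wrapped up as a small script, a sketch of that same pipeline could look like this (the sed step that strips the tags is an addition, not part of the one-liner above):
#!/bin/sh
# Usage: ./list_schemas.sh EXPORT.DMP
# Lists the schema names found in a Data Pump export file by scanning
# for the <OWNER_NAME> markers shown above.
DUMPFILE="$1"
cat -v "$DUMPFILE" \
  | grep -o '<OWNER_NAME>[^<]*</OWNER_NAME>' \
  | sed -e 's|<OWNER_NAME>||' -e 's|</OWNER_NAME>||' \
  | sort -u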
My solution (similar to KyleLanser's answer) (on a Unix box):
strings dumpfile.dmp | grep SCHEMA_LIST
In my case, based on Aldur's and slafs' answers I came up with this expression that should tell you just the name of the original schema:
cat -v file.dmp | grep 'SCHEMA_LIST' | uniq -u | grep -o -P '(?<=SCHEMAS\=).*(?=content)'
Tested with a DMP file from Oracle version 19.8.
