IBM WebSphere - exception in wsadmin script for AutoDeployment

I'm trying to write a simple Jython script to auto-deploy a web application on IBM WebSphere Application Server. However, I'm a novice in Python, so I can't understand why I get the following error:
WASX7209I: Connected to process "dmgr" on node was7CellManager01 using SOAP connector; The type of process is: DeploymentManager
WASX7017E: Exception received while running file "deploy_test.py"; exception information: com.ibm.bsf.BSFException: exception from Jython:
Traceback (innermost last):
File "<string>", line 14, in ?
TypeError: sequence subscript must be integer or slice
My script:
appname='name'
source='app.ear'
nodeName='was7Node01'
cell='was7Cell01'
server='server1'
contextRoot='/deploymenttest'
# 1. node
# 2. cell
# 3. server
# 4. Application Name
# 5. ContextRoot
# 5. JNDI target name
attrs = [
    '-node ', nodeName,
    ' -cell ', cell,
    ' -server ', server,
    ' -appname ', appname,
    ' -CtxRootForWebMod ', contextRoot,
    ' -MapResRefToEJB ', [
        [
            appname,"",
            source+',WEB-INF/web.xml',
            'jdbc/appdb','javax.sql.DataSource',
            'jbdc/app22','DefaultPrincipalMapping',
            'was7CellManager01/db2inst1',""
        ]
        [
            appname,"",
            source+',WEB-INF/web.xml',
            'jdbc/app1db','javax.sql.DataSource',
            'jbdc/app22','DefaultPrincipalMapping',
            'was7CellManager01/db2inst1',""
        ]
    ]
]
AdminApp.install(source, attrs)
Any ideas?
Thank you very much in advance.

There is a missing comma between the two nested lists in your -MapResRefToEJB value (between the closing bracket of the first entry and the opening bracket of the second). Without it, Jython tries to subscript the first list with the second, which produces the TypeError.
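With the comma added, the -MapResRefToEJB portion of attrs looks like this:
' -MapResRefToEJB ', [
    [
        appname, "",
        source + ',WEB-INF/web.xml',
        'jdbc/appdb', 'javax.sql.DataSource',
        'jbdc/app22', 'DefaultPrincipalMapping',
        'was7CellManager01/db2inst1', ""
    ],   # <-- the comma that was missing
    [
        appname, "",
        source + ',WEB-INF/web.xml',
        'jdbc/app1db', 'javax.sql.DataSource',
        'jbdc/app22', 'DefaultPrincipalMapping',
        'was7CellManager01/db2inst1', ""
    ]
]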

Related

Get current restart policy state (nodeRestartState) of AppServer using Jython

I want to get the current restart policy of an AppServer (RUNNING, STOPPED or PREVIOUS) using Jython.
servers = AdminTask.listServers('[-serverType APPLICATION_SERVER]').splitlines()
for server in servers:
    print server
    print AdminConfig.showAttribute(server, "monitoringPolicy")
    break
This gave me an exception that the attribute is invalid:
An exception occurred when executing the file "test.py". Information
about the exception: com.ibm.ws.scripting.ScriptingException:
WASX7080E: Invalid attributes for type "Server" -- "monitoringPolicy".
But I could get the attribute using print AdminConfig.showall(server):
...
[monitoringPolicy [[autoRestart true]
[maximumStartupAttempts 3]
[nodeRestartState STOPPED]
[pingInterval 60]
[pingTimeout 300]]]
...
To me it looks like monitoringPolicy is the key of a nested structure, so it should be possible to get the restart state with
policy = AdminConfig.showAttribute(server, "monitoringPolicy")
restartState = policy["restartState"] # Should be "STOPPED"
Where is the problem?
Edit
After taking a deeper look at the list output, I saw that I had missed a top-level property, processDefinitions, which is the parent of monitoringPolicy.
pd = AdminConfig.showAttribute(server, "processDefinitions")
print pd
This prints:
[(cells/CnxCell/nodes/CnxNode01/servers/UtilCluster_server1|server.xml#JavaProcessDef_1578492353152)]
But I'm not able to get any of the child properties from this object:
# TypeError: sequence subscript must be integer or slice
print pd["monitoringPolicy"]
# AttributeError: 'string' object has no attribute 'monitoringPolicy'
print pd.monitoringPolicy
MonitoringPolicy has its own configuration type. The following prints each server and its restart state, e.g. 'RUNNING' or 'STOPPED':
servers = AdminTask.listServers('[-serverType APPLICATION_SERVER]').splitlines()
for server in servers:
    print(server)
    mpol = AdminConfig.list("MonitoringPolicy", server)
    print(AdminConfig.showAttribute(mpol, 'nodeRestartState'))
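If only one specific server is of interest, the same lookup can be scoped to it directly; a minimal sketch, assuming placeholder node and server names:
# Placeholders: adjust the node and server names to your topology.
server = AdminConfig.getid('/Node:node01/Server:server1/')
mpol = AdminConfig.list('MonitoringPolicy', server)
print(AdminConfig.showAttribute(mpol, 'nodeRestartState'))  # e.g. STOPPED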

Why do I get a credentials read error with the Google Slides API?

I downloaded the G Suite developers Python samples from:
(https://github.com/gsuitedevs/python-samples)
I then enabled API access, downloaded the credentials.json file, and ran quickstart.py in
(python-samples-master/slides/quickstart), and it worked and output:
The presentation contains 5 slides:
- Slide #1 contains 4 elements.
- Slide #2 contains 11 elements.
- Slide #3 contains 9 elements.
- Slide #4 contains 5 elements.
- Slide #5 contains 12 elements.
So it worked. Then I tried to run test_snippets.py in:
(python-samples-master/slides/snippets)
And I get an error:
======================================================================
ERROR: setUpClass (__main__.SnippetsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "~/anaconda3/lib/python3.7/site-packages/oauth2client/client.py", line 1228, in _implicit_credentials_from_files
credentials_filename)
File "~/anaconda3/lib/python3.7/site-packages/oauth2client/client.py", line 1397, in _get_application_default_credential_from_file
AUTHORIZED_USER + "' or '" + SERVICE_ACCOUNT + "' values)")
oauth2client.client.ApplicationDefaultCredentialsError: 'type' field should be defined (and have one of the 'authorized_user' or 'service_account' values)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test_snippets.py", line 32, in setUpClass
super(SnippetsTest, cls).setUpClass()
File "~/Desktop/python-samples-master/slides/snippets/base_test.py", line 27, in setUpClass
cls.credentials = cls.create_credentials()
File "~/Desktop/python-samples-master/slides/snippets/base_test.py", line 44, in create_credentials
credentials = GoogleCredentials.get_application_default()
File "~/anaconda3/lib/python3.7/site-packages/oauth2client/client.py", line 1271, in get_application_default
return GoogleCredentials._get_implicit_credentials()
File "~/anaconda3/lib/python3.7/site-packages/oauth2client/client.py", line 1256, in _get_implicit_credentials
credentials = checker()
File "~/anaconda3/lib/python3.7/site-packages/oauth2client/client.py", line 1231, in _implicit_credentials_from_files
extra_help, error)
File "~/anaconda3/lib/python3.7/site-packages/oauth2client/client.py", line 1429, in _raise_exception_for_reading_json
credential_file + extra_help + ': ' + str(error))
oauth2client.client.ApplicationDefaultCredentialsError: An error was encountered while reading json file: ~/Documents/credentials/credentials.json (pointed to by GOOGLE_APPLICATION_CREDENTIALS environment variable): 'type' field should be defined (and have one of the 'authorized_user' or 'service_account' values)
I definitely have a GOOGLE_APPLICATION_CREDENTIALS environment variable that points to the same credentials that successfully ran quickstart.py.
Is there something else I need, or do I need to change some code to load the credentials data in?
It seems like the GoogleCredentials.get_application_default() call is the one that is erroring:
def create_credentials(cls):
    credentials = GoogleCredentials.get_application_default()
    scope = [
        'https://www.googleapis.com/auth/drive',
    ]
    return credentials.create_scoped(scope)
The file pointed to by the GOOGLE_APPLICATION_CREDENTIALS environment variable is not a valid service account JSON file. The credentials.json downloaded for quickstart.py is an OAuth client ID file, not a service account key, which is why the same file works with quickstart.py but not with get_application_default().
Open your service account json file. The beginning of the file should look similar to this:
{
  "type": "service_account",
  "project_id": "development-123456",
  "private_key_id": "19c38bac6560abcdef01234567ac4da7991cbaad",
  "private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANB

How do I setup Airflow's email configuration to send an email on errors?

I'm trying to make an Airflow task intentionally fail and error out by passing in a Bash line (thisshouldnotrun) that doesn't work. Airflow is outputting the following:
[2017-06-15 17:44:17,869] {bash_operator.py:94} INFO - /tmp/airflowtmpLFTMX7/run_bashm2MEsS: line 7: thisshouldnotrun: command not found
[2017-06-15 17:44:17,869] {bash_operator.py:97} INFO - Command exited with return code 127
[2017-06-15 17:44:17,869] {models.py:1417} ERROR - Bash command failed
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/models.py", line 1374, in run
result = task_copy.execute(context=context)
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/operators/bash_operator.py", line 100, in execute
raise AirflowException("Bash command failed")
AirflowException: Bash command failed
[2017-06-15 17:44:17,871] {models.py:1433} INFO - Marking task as UP_FOR_RETRY
[2017-06-15 17:44:17,878] {models.py:1462} ERROR - Bash command failed
Traceback (most recent call last):
File "/home/ubuntu/.local/bin/airflow", line 28, in <module>
args.func(args)
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/bin/cli.py", line 585, in test
ti.run(ignore_task_deps=True, ignore_ti_state=True, test_mode=True)
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/utils/db.py", line 53, in wrapper
result = func(*args, **kwargs)
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/models.py", line 1374, in run
result = task_copy.execute(context=context)
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/operators/bash_operator.py", line 100, in execute
raise AirflowException("Bash command failed")
airflow.exceptions.AirflowException: Bash command failed
Will Airflow send an email for these kinds of errors? If not, what would be the best way to send an email when they occur?
I'm not even sure if airflow.cfg is set up properly. Since the ultimate goal is to test the email alerting notification, I want to make sure it is. Here's the setup:
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = emailsmtpserver.region.amazonaws.com
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
# smtp_user = airflow_data_user
# smtp_password = password
smtp_port = 587
smtp_mail_from = airflow_data_user@domain.com
What is smtp_starttls? I can't find any info for it in the documentation or online. If we have 2-factor authentication needed to view emails, will that be an issue here for Airflow?
Here's my Bash command:
task1_bash_command = """
export PATH=/home/ubuntu/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
export rundate=`TZ='America/Los_Angeles' date +%F -d "yesterday"`
export AWS_CONFIG_FILE="/home/ubuntu/.aws/config"
/home/ubuntu/bin/snowsql -f //home/ubuntu/sql/script.sql 1> /home/ubuntu/logs/"$rundate"_dev.log 2> /home/ubuntu/logs/"$rundate"_error_dev.log
if [ -e /home/ubuntu/logs/"$rundate"_error_dev.log ]
then
exit 64
fi
"""
And my task:
task1 = BashOperator(
    task_id = 'run_bash',
    bash_command = task1_bash_command,
    dag = dag,
    retries = 2,
    email_on_failure = True,
    email = 'username@domain.com')
smtp_starttls basically means "use TLS" (STARTTLS).
Set this to False and set smtp_ssl to True if you want to use SSL instead. You probably need smtp_user and smtp_password for either.
Airflow will not handle 2-step authentication. However, if you are using AWS you likely don't need it, as your SMTP (SES) credentials are different from your AWS credentials.
See here.
EDIT:
For Airflow to send an email on failure, there are a couple of things that need to be set on your task: email_on_failure and email.
See here for example:
def throw_error(**context):
    raise ValueError('Intentionally throwing an error to send an email.')

t1 = PythonOperator(task_id='throw_error_and_email',
                    python_callable=throw_error,
                    provide_context=True,
                    email_on_failure=True,
                    email='your.email@whatever.com',
                    dag=dag)
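The same alerting settings can also be applied to every task at once via the DAG's default_args; a minimal sketch, where the DAG id, dates, and email address are placeholders:
from datetime import datetime, timedelta
from airflow import DAG

# Placeholders: adjust owner, start_date and the email address to your setup.
default_args = {
    'owner': 'airflow',
    'start_date': datetime(2017, 6, 1),
    'email': ['your.email@whatever.com'],
    'email_on_failure': True,
    'email_on_retry': False,
    'retries': 2,
    'retry_delay': timedelta(minutes=5),
}

dag = DAG('email_alert_example', default_args=default_args, schedule_interval='@daily')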
Use the link below for creating an Airflow DAG:
How to trigger daily DAG run at midnight local time instead of midnight UTC time
Approach 1:
You can set up SMTP locally and have it send email on job failures.
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
smtp_host = localhost
smtp_starttls = False
smtp_ssl = False
smtp_port = 25
smtp_mail_from = noreply@company.com
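To verify that the [smtp] settings actually deliver mail, a minimal test sketch (the recipient address is a placeholder) can be run from a Python shell on the Airflow host:
# Sends a test message through whatever is configured in the [smtp] section of airflow.cfg.
from airflow.utils.email import send_email

send_email(
    to='you@company.com',
    subject='Airflow SMTP test',
    html_content='If this arrives, the [smtp] section in airflow.cfg works.',
)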
Approach 2: You can use Gmail to send email.
I have written an article on how to do this:
https://helptechcommunity.wordpress.com/2020/04/04/airflow-email-configuration/
If we have 2-factor authentication needed to view emails, will that be an issue here for Airflow?
You can use a Google app password to work around 2-factor authentication:
https://support.google.com/mail/answer/185833?hl=en-GB
Source - https://docs.aws.amazon.com/mwaa/latest/userguide/configuring-env-variables.html

WASX7017E: Jython exception File "<string>"

When I try to execute my Jython script, I get this error:
WASX7017E: Exception received while running file "/opt/test_wsadmin_configura_datasource.jython"; exception information: com.ibm.bsf.BSFException: exception from Jython:
Traceback (innermost last):
File "<string>", line 29, in ?
NameError: dbuser
This is my test_wsadmin_configura_datasource.jython:
##
# src: https://www.ibm.com/developerworks/community/blogs/timdp/entry/automating_application_installation_and_configuration_into_websphere_application_server46?lang=en
#
# eseguire con wsadmin.sh -lang jython
# get an environment variable
import os;
#installRoot = os.environ["INSTALL_ROOT"]
# useful variables
cell = AdminControl.getCell()
node = AdminControl.getNode()
server = AdminControl.getConfigId(AdminControl.queryNames("node="+node+",type=Server,*"))
varmap = AdminConfig.list('VariableMap', server)
appman = AdminControl.queryNames("type=ApplicationManager,*")
def createJ2CAuthAlias(alias,description,user,password):
    sec = AdminConfig.getid('/Cell:'+ cell +'/Security:/')
    alias_attr = ["alias", alias]
    desc_attr = ["description", description]
    userid_attr = ["userId", user]
    password_attr = ["password", password]
    attrs = [alias_attr, desc_attr, userid_attr, password_attr]
    authdata = AdminConfig.create('JAASAuthData', sec, attrs)
    print "J2C Auth Alias created ---> " + alias
    AdminConfig.save()
    return

createJ2CAuthAlias(dbuser,description,DBUSER,PASS)
WebSphere is running inside original docker image ibmcom/websphere-traditional:8.5.5.11-install
How can I solve this?
EDIT1: I found here that the issue can be related to a non-UTF-8 character:
These errors can occur because there are UTF-8 characters in the file that are not valid.
...
An easy way to determine if a character that is not valid is causing the error is to enter export LANG=C and run the script again.
export LANG=C does not change the result.
I just found that double-quoting the arguments does the job:
createJ2CAuthAlias("dbuser","description","DBUSER","PASS")

IBM WebSphere - wsadmin script for AutoDeployment

Can you please look into the below issue?
import time
node = AdminConfig.getid('/Node:node111/')
print node
print "sss" +AdminControl.queryNames('WebSphere:type=Server,*')
cell = AdminControl.getCell()
print " Cell name is --> "+ cell
warLoc='/home/test/PA_Test.war'
appName='PA_Test'
cellName=AdminControl.getCell()
print cellName
nodeName=AdminControl.getNode()
print "hello"
print " nodeName is --> "+ nodeName
appManager=AdminControl.queryNames('cell='+cellName+',node=node111,type=ApplicationManager,process=WebSphere_Portal,*')
print appManager
application = AdminConfig.getid("/Deployment:"+appName+"/")
print 'printing application name in next line'
print application
len(application)
print application
len(application)
var1 = len(application)
if var1:
    print "Application exists"
    print "before uninstall"
    AdminApp.uninstall('PA_Test')
    print "after uninstall"
    AdminConfig.save()
else:
    print "Application doesnot exist"
print "Before install"
print AdminApp.install(warLoc,'[-target default -usedefaultbindings -defaultbinding.virtual.host default_host]')
print "Done from My Side"
print "After install"
AdminConfig.save()
time.sleep(30)
AdminControl.invoke(appManager , 'startApplication',appName)
print "The script is completed."
Below is the success message:
ADMA5016I: Installation of PA_Test.war154ed2178ed started.
ADMA5058I: Application and module versions are validated with versions of deployment targets.
ADMA5005I: The application PA_Test.war154ed2178ed is configured in the WebSphere Application Server repository.
ADMA5005I: The application PA_Test.war154ed2178ed is configured in the WebSphere Application Server repository.
ADMA5081I: The bootstrap address for client module is configured in the WebSphere Application Server repository.
ADMA5053I: The library references for the installed optional package are created.
ADMA5001I: The application binaries are saved in /opt/IBM/WebSphere/wp_profile/wstemp/Script154ed215c59/workspace/cells/inpudingpwmtst1Cell/applications/PA_Test.war154ed2178ed.ear/PA_Test.war154ed2178ed.ear
Now I am able to deploy the WAR, but the application name is getting changed to PA_Test.war154ed2178ed.ear, when it should actually be PA_Test.ear. Can you please help me change the current script?
Short answer:
print AdminApp.install(warLoc, [
    '-appname', 'PA_Test.ear',
    '-target', 'default',
    '-usedefaultbindings',
    '-defaultbinding.virtual.host', 'default_host'
])
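Note that with -appname set, the rest of the script should refer to the application by that name, e.g. AdminApp.uninstall('PA_Test.ear') and AdminControl.invoke(appManager, 'startApplication', 'PA_Test.ear'); this follows from -appname becoming the registered application name rather than being stated explicitly in the answer above.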
Even shorter one: use WDR.
You'll need an application manifest file PA_Test.wdra:
PA_Test.ear pa.war
    target default
    usedefaultbindings
    defaultbinding.virtual.host default_host
... and a Jython script to import this manifest:
import time

alreadyInstalled = 'PA_Test.ear' in AdminApp.list().splitlines()
importApplicationManifest('PA_Test.wdra')
save()
sync()
while AdminApp.isAppReady('PA_Test.ear') != 'true':
    time.sleep(10)
if not alreadyInstalled:
    for appMgr in queryMBeans(type='ApplicationManager', process='WebSphere_Portal'):
        appMgr.startApplication('PA_Test.ear')
Disclosure: I'm a contributor and maintainer of WDR
