jython: port definition - websphere

I am looking for a jython script that does the following:
Servers > Application servers > server1 > Ports > WC_default > (set) port=8080.
Environment > Virtual hosts > default_host > Host aliases > [if there is an entry with host name == *, then set port = 8080]
Thank you very much.

Use the following code as a starting point:
serverEntries = AdminConfig.list('ServerEntry', AdminConfig.getid('/Node:' + nodeName + '/')).split(java.lang.System.getProperty('line.separator'))
for serverEntry in serverEntries:
    if AdminConfig.showAttribute(serverEntry, "serverName") == 'server1':
        sepString = AdminConfig.showAttribute(serverEntry, "specialEndpoints")
        sepList = sepString[1:len(sepString)-1].split(" ")
        for specialEndPoint in sepList:
            endPointNm = AdminConfig.showAttribute(specialEndPoint, "endPointName")
            if endPointNm == "WC_defaulthost":
                ePoint = AdminConfig.showAttribute(specialEndPoint, "endPoint")
                # at this point you probably want an AdminConfig.modify to set the port, not just a showAttribute
                defaultHostPort = AdminConfig.showAttribute(ePoint, "port")
                break
for hostAlias in AdminConfig.getid('/Cell:' + cellName + '/VirtualHost:default_host/HostAlias:/').split(java.lang.System.getProperty('line.separator')):
    if AdminConfig.showAttribute(hostAlias, 'port') == defaultHostPort:
        print "Deleting host alias for port " + defaultHostPort
        AdminConfig.remove(hostAlias)
AdminConfig.create('HostAlias', AdminConfig.getid('/Cell:' + cellName + '/VirtualHost:default_host/'), [['hostname', '*'], ['port', defaultHostPort]])
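A side note on the `sepString[1:len(sepString)-1].split(" ")` idiom above: wsadmin returns containment lists as a single bracketed string, and stripping the brackets before splitting is a recurring chore. Here is a plain-Python sketch of that parsing step (the helper name is hypothetical, not a wsadmin API), which can be tested outside of wsadmin:

```python
# Plain-Python sketch of the bracket-stripping idiom used on the
# "specialEndpoints" attribute above. wsadmin returns such lists as one
# bracketed string like "[ep1(...) ep2(...)]"; this hypothetical helper
# turns that into a Python list of entries.
def parse_wsadmin_list(raw):
    raw = raw.strip()
    if raw.startswith("[") and raw.endswith("]"):
        raw = raw[1:-1]
    # split on whitespace and drop empty entries
    return [item for item in raw.split() if item]

print(parse_wsadmin_list("[ep1(cells/c1|node.xml#EndPoint_1) ep2(cells/c1|node.xml#EndPoint_2)]"))
```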

Related

Generate Azure Storage SAS Signature In Ruby

I am trying to use the following code to generate a valid URL for accessing a blob in my Azure storage account. The Azure account name and key are stored in .env files. For some reason, the URL doesn't work; I get a "Signature did not match" error.
# version 2018-11-09 and later, https://learn.microsoft.com/en-us/rest/api/storageservices/create-service-sas#version-2018-11-09-and-later
signed_permissions = "r"
signed_start = "#{(start_time - 5.minutes).iso8601}"
signed_expiry = "#{(start_time + 10.minutes).iso8601}"
canonicalized_resource = "/blob/#{Config.azure_storage_account_name}/media/#{medium.tinyurl}"
signed_identifier = ""
signed_ip = ""
signed_protocol = "https"
signed_version = "2018-11-09"
signed_resource = "b"
signed_snapshottime = ""
rscc = ""
rscd = ""
rsce = ""
rscl = ""
rsct = ""
string_to_sign = signed_permissions + "\n" +
signed_start + "\n" +
signed_expiry + "\n" +
canonicalized_resource + "\n" +
signed_identifier + "\n" +
signed_ip + "\n" +
signed_protocol + "\n" +
signed_version + "\n" +
signed_resource + "\n" +
signed_snapshottime + "\n" +
rscc + "\n" +
rscd + "\n" +
rsce + "\n" +
rscl + "\n" +
rsct
sig = OpenSSL::HMAC.digest('sha256', Base64.strict_decode64(Config.azure_storage_account_key), string_to_sign.encode(Encoding::UTF_8))
sig = Base64.strict_encode64(sig)
#result = "#{medium.storageurl}?sp=#{signed_permissions}&st=#{signed_start}&se=#{signed_expiry}&spr=#{signed_protocol}&sv=#{signed_version}&sr=#{signed_resource}&sig=#{sig}"
PS: This is in Rails and medium is a record pulled from the DB that contains information about the blob in Azure.
Turns out the issue was clock skew. The signed_start and signed_expiry windows I was using were too tight. When I relaxed them to -30/+20 minutes, I could reliably create SAS tokens using the snippet I posted.
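For reference, the same string-to-sign and HMAC step can be sketched in Python, which makes it easy to sanity-check the field order for API version 2018-11-09 (the account key and blob path below are placeholders, not real values):

```python
# Sketch of the service-SAS signing step (API version 2018-11-09).
# The account key and canonicalized resource are placeholder values.
import base64
import hashlib
import hmac

def sign_sas(account_key_b64, fields):
    # fields must be the 15 string-to-sign components, in order
    string_to_sign = "\n".join(fields)
    key = base64.b64decode(account_key_b64)
    sig = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(sig).decode("ascii")

fields = [
    "r",                            # signedPermissions
    "2020-01-01T00:00:00Z",         # signedStart
    "2020-01-01T01:00:00Z",         # signedExpiry
    "/blob/myaccount/media/myblob", # canonicalizedResource (placeholder)
    "", "",                         # signedIdentifier, signedIP
    "https",                        # signedProtocol
    "2018-11-09",                   # signedVersion
    "b",                            # signedResource
    "",                             # signedSnapshotTime
    "", "", "", "", "",             # rscc, rscd, rsce, rscl, rsct
]
token = sign_sas(base64.b64encode(b"placeholder-key").decode(), fields)
```

Because HMAC-SHA256 always yields 32 bytes, the base64 signature is always 44 characters; if yours is a different length, the encode/decode steps are the first thing to check.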

Genexus: Full process for FTP connection and exchanging data

What is the whole process for connecting to an FTP server and exchanging data using GeneXus?
You can use the SFTP Module.
Documentation
https://wiki.genexus.com/commwiki/servlet/wiki?45274,GeneXus+FTPS+Module
Here is some code:
//Knowledge Manager / Manage Module References / GeneXusFTPS Module
//GX16U8 Module SFTP, SecurityAPICommons  //&SftpOptions  //SDT SftpOptions
&SftpOptions.Host = !"172.16.4.5"
&SftpOptions.User = !"dummyuser"
&SftpOptions.Port = 22
&SftpOptions.Password = !"dummypass"
&SftpOptions.AllowHostKeyChecking = true
&SftpOptions.KeyPath = !"C:\Temp\keys\private_key.pem"
&SftpOptions.KeyPassword = !"dummykeypass"
&SftpOptions.KnownHostsPath = !"C:\Temp\known_hosts"
If &SftpClient.Connect( &SftpOptions)
    If &SftpClient.Put( !"C:\temp\testfile.txt", !"/sftptest")
    Else
        If &SftpClient.HasError()
            msg( !"Error. Code: " + &SftpClient.GetErrorCode() + !" Description: " + &SftpClient.GetErrorDescription())
        Endif
    Endif
Else
    If &SftpClient.HasError()
        msg( !"Error. Code: " + &SftpClient.GetErrorCode() + !" Description: " + &SftpClient.GetErrorDescription())
    Endif
Endif
&SftpClient.Disconnect()
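The shape of that example (connect, put, report errors at each step, always disconnect) is the part worth copying. Here is the same control flow sketched in plain Python with an injected client object, so it can be exercised without a real server; the method names only loosely mirror the GeneXus SftpClient and are illustrative, not an actual API:

```python
# Sketch of the connect / put / report-error / disconnect flow from the
# GeneXus example. The client is injected so the flow can be tested
# offline; the method names are illustrative, not a real SFTP API.
def transfer_file(client, options, local_path, remote_dir):
    messages = []
    if client.connect(options):
        if not client.put(local_path, remote_dir):
            if client.has_error():
                messages.append("Error. Code: %s Description: %s"
                                % (client.get_error_code(), client.get_error_description()))
    else:
        if client.has_error():
            messages.append("Error. Code: %s Description: %s"
                            % (client.get_error_code(), client.get_error_description()))
    client.disconnect()
    return messages

class _StubClient(object):
    """Minimal fake client, used only to demonstrate the flow."""
    def __init__(self, connect_ok, put_ok):
        self._connect_ok, self._put_ok = connect_ok, put_ok
    def connect(self, options): return self._connect_ok
    def put(self, local, remote): return self._put_ok
    def has_error(self): return True
    def get_error_code(self): return 42
    def get_error_description(self): return "stub failure"
    def disconnect(self): pass

ok = transfer_file(_StubClient(True, True), {}, "C:/temp/testfile.txt", "/sftptest")
bad = transfer_file(_StubClient(True, False), {}, "C:/temp/testfile.txt", "/sftptest")
```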

Reading from check boxes doesn't work

I want to iterate through check boxes to get the caption text from each one. I have this code but it is not working. Could someone tell me what's wrong?
Is that because later in the For loop I am using $i to iterate through other things? But it doesn't even run the Send() command. Does AutoIt increment the $i variable automatically?
For $i = 1 To 64
    If GUICtrlRead("$Checkbox" & $i, 0) = $GUI_CHECKED Then
        Local $checkboxtext = GUICtrlRead($Checkbox[$i], 1)
        Local $checkboxtextsplit = StringSplit($checkboxtext, "/")
        $instanz = $checkboxtextsplit[1]
        $favorite = "F" & $checkboxtextsplit[2]
        $position = $checkboxtextsplit[3]
        ; Select actual instance from checkbox name.
        If $instanz = "1" Then
            WinActivate($handle1)
        Else
            WinActivate($handle2)
        EndIf
        Send("{" & $favorite & "}")
        ;...
    EndIf
Next
I was providing GUICtrlRead() its parameters the wrong way. Instead of:
If GUICtrlRead("$Checkbox" & $i, 0) = $GUI_CHECKED Then
Local $checkboxtext = GUICtrlRead($Checkbox[$i], 1)
It should be:
If GUICtrlRead($Checkbox & $i, 0) = $GUI_CHECKED Then
Local $checkboxtext = GUICtrlRead($Checkbox & $i, 1)
To retrieve a Checkbox checked/un-checked state use:
If GUICtrlRead($Checkbox & $i, 0) = $GUI_CHECKED Then ...
To read text of a Checkbox use:
$checkboxtext = GUICtrlRead($Checkbox & $i, 1)
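One subtlety worth noting is the caption parsing itself: AutoIt's StringSplit() returns a 1-based array (element 0 holds the count), which is why the original indexes start at [1]. Here is a sketch of that same "instanz/favorite/position" parsing step in Python for comparison (the function name is mine, not part of the question's script):

```python
# Sketch of the caption-parsing step. AutoIt's StringSplit() is 1-based
# (element 0 is the count), which is why the AutoIt code indexes [1]..[3];
# Python's split() is 0-based.
def parse_caption(caption):
    parts = caption.split("/")
    if len(parts) != 3:
        raise ValueError("expected 'instanz/favorite/position', got %r" % caption)
    instanz, favorite, position = parts
    return instanz, "F" + favorite, position

print(parse_caption("1/3/7"))  # -> ('1', 'F3', '7')
```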

Running a mapreduce job on cloudera demo cdh3u4 (airline data example)

I'm working through Jeffrey Breen's R-Hadoop tutorial (October 2012).
At the moment I'm trying to populate HDFS and then run the commands Jeffrey published in his tutorial in RStudio. Unfortunately I ran into some trouble:
UPDATE: I have now moved the data folder to:
/home/cloudera/data/hadoop/wordcount (and the same for the airline data)
Now when I run populate.hdfs.sh I get the following output:
[cloudera@localhost ~]$ /home/cloudera/TutorialBreen/bin/populate.hdfs.sh
mkdir: cannot create directory /user/cloudera: File exists
mkdir: cannot create directory /user/cloudera/wordcount: File exists
mkdir: cannot create directory /user/cloudera/wordcount/data: File exists
mkdir: cannot create directory /user/cloudera/airline: File exists
mkdir: cannot create directory /user/cloudera/airline/data: File exists
put: Target /user/cloudera/airline/data/20040325.csv already exists
And then I tried the commands in RStudio as shown in the tutorial but I get errors at the end. Can someone show me what I did wrong?
> if (LOCAL)
+ {
+ rmr.options.set(backend = 'local')
+ hdfs.data.root = 'data/local/airline'
+ hdfs.data = file.path(hdfs.data.root, '20040325-jfk-lax.csv')
+ hdfs.out.root = 'out/airline'
+ hdfs.out = file.path(hdfs.out.root, 'out')
+ if (!file.exists(hdfs.out))
+ dir.create(hdfs.out.root, recursive=T)
+ } else {
+ rmr.options.set(backend = 'hadoop')
+ hdfs.data.root = 'airline'
+ hdfs.data = file.path(hdfs.data.root, 'data')
+ hdfs.out.root = hdfs.data.root
+ hdfs.out = file.path(hdfs.out.root, 'out')
+ }
> asa.csvtextinputformat = make.input.format( format = function(con, nrecs) {
+ line = readLines(con, nrecs)
+ values = unlist( strsplit(line, "\\,") )
+ if (!is.null(values)) {
+ names(values) = c('Year','Month','DayofMonth','DayOfWeek','DepTime','CRSDepTime',
+ 'ArrTime','CRSArrTime','UniqueCarrier','FlightNum','TailNum',
+ 'ActualElapsedTime','CRSElapsedTime','AirTime','ArrDelay',
+ 'DepDelay','Origin','Dest','Distance','TaxiIn','TaxiOut',
+ 'Cancelled','CancellationCode','Diverted','CarrierDelay',
+ 'WeatherDelay','NASDelay','SecurityDelay','LateAircraftDelay')
+ return( keyval(NULL, values) )
+ }
+ }, mode='text' )
> mapper.year.market.enroute_time = function(key, val) {
+ if ( !identical(as.character(val['Year']), 'Year')
+ & identical(as.numeric(val['Cancelled']), 0)
+ & identical(as.numeric(val['Diverted']), 0) ) {
+ if (val['Origin'] < val['Dest'])
+ market = paste(val['Origin'], val['Dest'], sep='-')
+ else
+ market = paste(val['Dest'], val['Origin'], sep='-')
+ output.key = c(val['Year'], market)
+ output.val = c(val['CRSElapsedTime'], val['ActualElapsedTime'], val['AirTime'])
+ return( keyval(output.key, output.val) )
+ }
+ }
> reducer.year.market.enroute_time = function(key, val.list) {
+ if ( require(plyr) )
+ val.df = ldply(val.list, as.numeric)
+ else { # this is as close as my deficient *apply skills can come w/o plyr
+ val.list = lapply(val.list, as.numeric)
+ val.df = data.frame( do.call(rbind, val.list) )
+ }
+ colnames(val.df) = c('crs', 'actual','air')
+ output.key = key
+ output.val = c( nrow(val.df), mean(val.df$crs, na.rm=T),
+ mean(val.df$actual, na.rm=T),
+ mean(val.df$air, na.rm=T) )
+ return( keyval(output.key, output.val) )
+ }
> mr.year.market.enroute_time = function (input, output) {
+ mapreduce(input = input,
+ output = output,
+ input.format = asa.csvtextinputformat,
+ output.format='csv', # note to self: 'csv' for data, 'text' for bug
+ map = mapper.year.market.enroute_time,
+ reduce = reducer.year.market.enroute_time,
+ backend.parameters = list(
+ hadoop = list(D = "mapred.reduce.tasks=2")
+ ),
+ verbose=T)
+ }
> out = mr.year.market.enroute_time(hdfs.data, hdfs.out)
Error in file(f, if (format$mode == "text") "r" else "rb") :
cannot open the connection
In addition: Warning message:
In file(f, if (format$mode == "text") "r" else "rb") :
cannot open file 'data/local/airline/20040325-jfk-lax.csv': No such file or directory
> if (LOCAL)
+ {
+ results.df = as.data.frame( from.dfs(out, structured=T) )
+ colnames(results.df) = c('year', 'market', 'flights', 'scheduled', 'actual', 'in.air')
+ print(head(results.df))
+ }
Error in to.dfs.path(input) : object 'out' not found
Thank you so much!
First of all, it looks like the command:
/usr/bin/hadoop fs -mkdir /user/cloudera/wordcount/data
Is being split into multiple lines. Make sure you're entering it as-is.
Also, it is saying that the local directory data/hadoop/wordcount does not exist. Verify that you're running this command from the correct directory and that your local data is where you expect it to be.
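Since the R error ("cannot open file 'data/local/airline/20040325-jfk-lax.csv'") is a plain missing-file problem, a quick pre-flight check before calling mapreduce() can save a failed run. A small sketch (the paths are the ones from the question and may differ on your machine):

```python
# Pre-flight check for the tutorial's local mode: verify the expected
# input files exist before launching the job. Paths are taken from the
# question and are placeholders for your own layout.
import os

def check_local_inputs(paths):
    """Return the subset of paths that do not exist."""
    return [p for p in paths if not os.path.exists(p)]

missing = check_local_inputs([
    "data/local/airline/20040325-jfk-lax.csv",
])
if missing:
    print("Missing input files: %s" % ", ".join(missing))
```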

get partial running status of application using jython

Hi, I need to know whether the application is partially running. Using the following command I am able to find out if the application is running:
serverstatus = AdminControl.completeObjectName('type=Application,name='+n1+',*')
print serverstatus
Is there any other way to check whether the current status of the application is partially running?
Regards
Snehan Solomon
In order to accurately determine whether the application is partially started/stopped, you must first determine the deployment targets against which the application is deployed, and then determine whether or not the application is running on that server:
import sys

def isApplicationRunning(applicationName, serverName, nodeName) :
    return AdminControl.completeObjectName("type=Application,name=%s,process=%s,node=%s,*" % (applicationName, serverName, nodeName)) != ""

def printApplicationStatus(applicationName) :
    servers = started = 0
    targets = AdminApplication.getAppDeploymentTarget(applicationName)
    for target in targets :
        type = AdminConfig.getObjectType(target)
        if (type == "ClusteredTarget") :
            clusterName = AdminConfig.showAttribute(target, "name")
            members = AdminUtilities.convertToList(AdminConfig.getid("/ServerCluster:%s/ClusterMember:/" % clusterName))
            for member in members :
                serverName = AdminConfig.showAttribute(member, "memberName")
                nodeName = AdminConfig.showAttribute(member, "nodeName")
                started += isApplicationRunning(applicationName, serverName, nodeName)
                servers += 1
        elif (type == "ServerTarget") :
            serverName = AdminConfig.showAttribute(target, "name")
            nodeName = AdminConfig.showAttribute(target, "nodeName")
            started += isApplicationRunning(applicationName, serverName, nodeName)
            servers += 1
    if (started == 0) :
        print "The application [%s] is NOT RUNNING." % applicationName
    elif (started != servers) :
        print "The application [%s] is PARTIALLY RUNNING." % applicationName
    else :
        print "The application [%s] is RUNNING." % applicationName

if (__name__ == "__main__"):
    printApplicationStatus(sys.argv[0])
Note that the AdminApplication script library only exists for WAS 7+, so if you are running an older version, you will need to obtain the deployment targets yourself.
I was able to get the partial status of the application based on the number of nodes. I just hardcoded the number of nodes and compared it against the number of MBeans returned.
import sys

appName = sys.argv[0]
appCount = 0
nodeCount = 2
appMBeans = AdminControl.queryNames('type=Application,name=' + appName + ',*').split("\n")
for mbean in appMBeans:
    if mbean != "":
        appCount = appCount + 1
print "Count of Applications is %s" % (appCount)
if appCount == 0:
    print "----!!!ALERT!!!!---- The Application " + appName + " is Not Running"
elif appCount > 0 and appCount < nodeCount:
    print "----!!!ALERT!!!!---- The Application " + appName + " is Partially Running"
elif appCount == nodeCount:
    print "The Application " + appName + " is Running"
