Is it possible to stop/start a WAS appserver using wsadmin (JACL/Jython)? I want to delete all caches on the profile and then restart the WAS appserver. I'm using wsadmin standalone.
From wsadmin you may issue a command (using Jython):
AdminControl.invoke(AdminControl.queryNames('WebSphere:*,type=Server,node=%s,process=%s' % ('YourNodeName', 'YourServerName')), 'restart')
This works with both WAS Base and ND.
With ND you have another option:
AdminControl.invoke(AdminControl.queryNames('WebSphere:*,type=Server,node=%s,process=%s' % ('YourNodeName', 'YourServerName')), 'stop')
# now your server is stopped, you can do any cleanup
# and then start the server with NodeAgent
AdminControl.invoke(AdminControl.queryNames('WebSphere:*,type=NodeAgent,node=%s' % 'YourNodeName'), 'launchProcess', ['YourServerName'], ['java.lang.String'])
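Since wsadmin is being run standalone, either snippet can be saved to a script file and executed non-interactively; a sketch of the invocation (the install path, host, SOAP port, and script name are assumptions for a typical installation):
/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jython -conntype SOAP -host yourHost -port 8879 -f restartServer.py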
Check out the wsadminlib script. It has over 500 methods written for you to perform specific wsadmin tasks. Also check out the related wsadminlib blog - you'll definitely want to view the PowerPoint on that site to get an overview of usage.
You don't specify which caches you would like to clear. If you want to clear dynacache, wsadminlib offers clearDynaCache, clearAllProxyCaches, and others, as well as server restart methods.
Example usage:
import sys

# Load wsadminlib into the current wsadmin session
execfile('/opt/software/portalsoftware/wsadminlib/wsadminlib.py')

clearAllProxyCaches()

for (nodename, servername) in listAllAppServers():
    # 'dynacachename' must be set to the name of the cache instance you want cleared
    clearDynaCache(nodename, servername, dynacachename)
    save()
    maxwaitseconds = 300
    restartServer(nodename, servername, maxwaitseconds)
I have written a Terraform template that creates an Azure Windows VM. I need to configure the VM to enable PowerShell Remoting so that the release pipeline can execute PowerShell scripts. After the VM is created I can RDP to the VM and do everything needed to enable PowerShell remoting, but ideally I would script all of that so it could be executed in a release pipeline. There are two things that prevent that.
The first, and the topic of this question, is that I have to run "WinRM quickconfig". I have the template working such that, when I RDP to the VM after creation and run "WinRM quickconfig", I receive the following responses:
WinRM service is already running on this machine.
WinRM is not set up to allow remote access to this machine for management.
The following changes must be made:
Configure LocalAccountTokenFilterPolicy to grant administrative rights remotely to local users.
Make these changes [y/n]?
I want to configure the VM in Terraform so that LocalAccountTokenFilterPolicy is set and it becomes unnecessary to RDP to the VM to run "WinRM quickconfig". After some research it appeared I might be able to do that using the azurerm_virtual_machine_extension resource. I added this to my template:
resource "azurerm_virtual_machine_extension" "vmx" {
name = "hostname"
location = "${var.location}"
resource_group_name = "${var.vm-resource-group-name}"
virtual_machine_name = "${azurerm_virtual_machine.vm.name}"
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = <<SETTINGS
{
# "commandToExecute": "powershell Set-ItemProperty -Path 'HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Policies\\System' -Name 'LocalAccountTokenFilterPolicy' -Value 1 -Force"
}
SETTINGS
}
When I apply this, I get the error:
Error: compute.VirtualMachineExtensionsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="This operation cannot be performed when extension operations are disallowed. To allow, please ensure VM Agent is installed on the VM and the osProfile.allowExtensionOperations property is true."
I couldn't find any Terraform documentation that addresses how to set the allowExtensionOperations property to true. On a whim, I tried adding the property "allow_extension_operations" to the os_profile block in the azurerm_virtual_machine resource, but it was rejected as an invalid property. I also tried adding it to the os_profile_windows_config block, and it isn't valid there either.
I found a statement on Microsoft's documentation regarding the osProfile.allowExtensionOperations property that says:
"This may only be set to False when no extensions are present on the virtual machine."
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.management.compute.models.osprofile.allowextensionoperations?view=azure-dotnet
This implies to me that the property is true by default, but the documentation doesn't actually say that, and it certainly isn't acting like that. Is there a way in Terraform to set osProfile.allowExtensionOperations to true?
I ran into the same issue adding extensions using Terraform with a Windows 2016 custom image, using:
provider "azurerm" version = "2.0.0"
Terraform 0.12.24
Terraform apply error:
compute.VirtualMachineExtensionsClient#CreateOrUpdate: Failure sending request: StatusCode=0
-- Original Error: autorest/azure: Service returned an error.
Status=<nil>
Code="OperationNotAllowed"
Message="This operation cannot be performed when extension operations are disallowed. To allow, please ensure VM Agent is installed on the VM and the osProfile.allowExtensionOperations property is true."
I ran into the same error; the solution depends on two things here.
You have to use provider "azurerm" version = "2.5.0", and you also have to set the os_profile_windows_config block (see below) in the virtual machine resource, so that Terraform provisions the VM agent and the extensions you are passing can be applied. This fixed my errors.
os_profile_windows_config {
  provision_vm_agent = true
}
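For reference, here is a minimal sketch of how the two pieces can fit together on the 2.x provider; the resource/extension names, the Microsoft.Compute CustomScriptExtension handler, and the omitted VM arguments are assumptions to adapt to your own template, not a drop-in configuration:
resource "azurerm_virtual_machine" "vm" {
  # name, location, resource_group_name, vm_size, storage_os_disk and
  # os_profile blocks omitted -- keep whatever your template already defines

  os_profile_windows_config {
    # Installs the Azure VM Agent so that osProfile.allowExtensionOperations
    # is satisfied and extension operations are allowed on the VM
    provision_vm_agent = true
  }
}

resource "azurerm_virtual_machine_extension" "winrm" {
  name                 = "enable-winrm"
  virtual_machine_id   = azurerm_virtual_machine.vm.id
  # Windows VMs normally use the Microsoft.Compute CustomScriptExtension
  # handler rather than the Linux Microsoft.Azure.Extensions/CustomScript one
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.10"

  settings = <<SETTINGS
    {
      "commandToExecute": "powershell Set-ItemProperty -Path 'HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Policies\\System' -Name 'LocalAccountTokenFilterPolicy' -Value 1 -Force"
    }
SETTINGS
}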
I am using RStudio to connect to my HDFS file using SparkR. When I leave Spark analyses running overnight, I get an "R session aborted" error the next day. From Spark's documentation on SparkR (https://spark.apache.org/docs/latest/configuration.html), the default value of spark.r.backendConnectionTimeout is 6000s. I would like to change this value to something large enough that my connection doesn't time out after the analysis is done.
I have tried the following:
sparkR.session(master = "local[*]", sparkConfig = list(spark.r.backendConnectionTimeout = 10))
sparkR.session(master = "local[*]", spark.r.backendConnectionTimeout = 10)
I get the same output for both commands:
Spark package found in SPARK_HOME: C:\Spark\spark-2.3.2-bin-hadoop2.7
Launching java with spark-submit command C:\Spark\spark-2.3.2-bin-hadoop2.7/bin/spark-submit2.cmd sparkr-shell C:\Users\XYZ\AppData\Local\Temp\3\RtmpiEaE5q\backend_port696c18316c61
Java ref type org.apache.spark.sql.SparkSession id 1
It seems that the parameter was not passed correctly. Also, I am not sure where to pass that parameter.
Any help would be appreciated.
There is a similar post, but it involves Zeppelin (how to change spark.r.backendConnectionTimeout value?).
Thanks.
I found the solution: modify the spark-defaults.conf file and add the following line:
spark.r.backendConnectionTimeout = 6000000
(or whatever time limit you want)
Important note: restart the Hadoop and YARN services, then try connecting to Spark with SparkR as usual:
library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib")))
sparkR.session(master = "local")
You can check whether the setting took effect at http://localhost:4040/environment/
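Alternatively, you can query the effective value from the R session itself; a small sketch, assuming your SparkR version provides sparkR.conf():
library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib")))
sparkR.session(master = "local")

# Should print the value picked up from spark-defaults.conf, e.g. "6000000"
sparkR.conf("spark.r.backendConnectionTimeout")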
I hope this is useful for other people.
In my service configuration, TimeoutStartSec is 100s.
According to the man page, my application needs to notify systemd with sd_notify(READY=1) within those 100 seconds; if it does not, the service is put into the failed state.
https://www.freedesktop.org/software/systemd/man/systemd.service.html
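For reference, the readiness notification itself is a single call; a minimal sketch for a Type=notify unit, assuming libsystemd is available (link with -lsystemd):
#include <systemd/sd-daemon.h>

int main(void) {
    /* ... do the service's start-up work here ... */

    /* Tell systemd that start-up is complete before TimeoutStartSec elapses */
    sd_notify(0, "READY=1");

    /* ... run the main loop ... */
    return 0;
}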
But I want to do something (e.g. just print a log line saying that start-up did not finish in time) before my service is actually put into the failed state.
Is there any chance to do that?
My idea is to create a timer with the same value as TimeoutStartSec,
so that I can do something before the timer expires.
The problem is that TimeoutStartSec is dynamically configured by the user in my project,
so I would expect some D-Bus interface that lets me read TimeoutStartSec from my application.
I checked
https://www.freedesktop.org/wiki/Software/systemd/dbus/
but did not find a corresponding property.
I am using systemd on Linux, so I can freely use the systemd D-Bus interfaces.
I found the solution.
systemd actually provides that information via the TimeoutStartUSec property on the unit's Service interface:
dbus-send --system --dest=org.freedesktop.systemd1 --print-reply /org/freedesktop/systemd1/unit/ServiceName_2eservice \
org.freedesktop.DBus.Properties.Get string:org.freedesktop.systemd1.Service string:TimeoutStartUSec
Note: the service name has to be escaped to form the object path, so ServiceName.service becomes ServiceName_2eservice (the '.' is encoded as '_2e').
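To read the same property directly from the application, here is a minimal sd-bus sketch; the unit name MyService.service is a placeholder, and you link with -lsystemd:
#include <stdio.h>
#include <stdint.h>
#include <systemd/sd-bus.h>

int main(void) {
    sd_bus *bus = NULL;
    sd_bus_error error = SD_BUS_ERROR_NULL;
    uint64_t timeout_usec = 0;

    int r = sd_bus_open_system(&bus);
    if (r < 0) {
        fprintf(stderr, "Failed to connect to the system bus: %d\n", r);
        return 1;
    }

    /* TimeoutStartUSec is a uint64 ('t') property on the Service interface */
    r = sd_bus_get_property_trivial(bus,
            "org.freedesktop.systemd1",
            "/org/freedesktop/systemd1/unit/MyService_2eservice",
            "org.freedesktop.systemd1.Service",
            "TimeoutStartUSec",
            &error, 't', &timeout_usec);
    if (r < 0)
        fprintf(stderr, "Failed to read TimeoutStartUSec: %s\n", error.message);
    else
        printf("TimeoutStartSec = %llu microseconds\n",
               (unsigned long long)timeout_usec);

    sd_bus_error_free(&error);
    sd_bus_unref(bus);
    return r < 0 ? 1 : 0;
}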
When I am using the WebSphere console and navigate to Secure Administration -> SSO, there is a checkbox called 'Require SSL'. How do I enable/disable this using JACL/Jython?
I have even used the command assistance from the console, but when I checked the logs, I can see almost every other command being issued apart from this setting.
Using Jython:
AdminTask.configureSingleSignon('-requiresSSL true')
Other available options for the configureSingleSignon command:
-enable [true|false]
-domainName [String]
-interoperable [true|false]
-attributePropagation [true|false]
Reference: SecurityConfigurationCommands command group for the AdminTask object.
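For example, a small wsadmin (Jython) sequence that sets the option and persists the change (the domain name is only a placeholder):
# Enable SSO, require SSL for the SSO token, then save the configuration
AdminTask.configureSingleSignon('[-enable true -requiresSSL true -domainName example.com]')
AdminConfig.save()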
When running a test which makes use of the JMeter Plugins listener "Response Times vs Threads" or "Active Threads Over Time", remote execution of the test plan produces a results file with missing fields that are needed to plot the graph, whereas a local run returns all fields. E.g. when using Response Times vs Threads:
Example of a local result:
1383659591841,59,Example 1,200,OK,Example 1 1-579,text,true,183,22,22,59
Example of a remote result:
1383659859149,43,Example 1,200,OK,Example 1 1-575,text,true,183,43
Note the last two fields are missing
I would check the test plan definition on the two servers: maybe some configuration of the "Write results to file" section of the listener has been changed.
Take the local .jmx file and copy it to the remote server.
Also, look for differences in the "# Results file configuration" section of the jmeter.properties file.
Make sure that on all of the slave/remote servers the jmeter.properties file within $JMETER_HOME/bin has the following setting:
jmeter.save.saveservice.thread_counts=true
By default this is set to false (and commented out).
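Restart the jmeter-server process on each slave after editing so the new setting is picked up. For reference, the relevant part of the "# Results file configuration" section might then look like the sketch below (the surrounding keys are just common defaults shown for orientation; only thread_counts matters for the missing columns):
# Results file configuration ($JMETER_HOME/bin/jmeter.properties on each remote host)
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.latency=true
jmeter.save.saveservice.bytes=true
# Record active thread counts so plugin graphs such as
# Response Times vs Threads have the data they need
jmeter.save.saveservice.thread_counts=true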
For more information:
JMeter Plugins Installation