Change the JVM logs path using Jython - WebSphere

I would like to change the JVM logs path in WebSphere 8.5.5.
In the GUI this is done under: Logging and tracing > NorkomServer > JVM Logs.
There is a required File Name field containing:
${SERVER_LOG_ROOT}/SystemOut.log
I have to change it to "/opt/logs/SystemOut.log".
Instead of using sed, I must use Jython. Any help, please?

You can run the wsadmin script below (with -lang jython) to update the default file path for the SystemOut.log file.
# Update the variables below before running this script
# Enter the node name
nodename = ''
# Enter the server name
servername = ''
# Set the updated file path
filename = '/opt/logs/SystemOut.log'
# Set the stream name:
# outputStreamRedirect for SystemOut, errorStreamRedirect for SystemErr
streamName = 'outputStreamRedirect'
# Get the server id
serverId = AdminConfig.getid('/Cell:/Node:' + nodename + '/Server:' + servername)
# Get the stream id
streamId = AdminConfig.showAttribute(serverId, streamName)
# Update the file path
AdminConfig.modify(streamId, [['fileName', filename]])
# Save the changes
AdminConfig.save()
Restart the JVM after this change.
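To confirm the change before restarting, you can read the attribute back in the same wsadmin session. This is a sketch that assumes nodename, servername, and streamName are still set as in the script above:

```python
# Re-read the fileName attribute to verify the modification was saved
serverId = AdminConfig.getid('/Cell:/Node:' + nodename + '/Server:' + servername)
streamId = AdminConfig.showAttribute(serverId, streamName)
# Should print the new path, e.g. /opt/logs/SystemOut.log
print AdminConfig.showAttribute(streamId, 'fileName')
```

Note that this only runs inside wsadmin; the AdminConfig object is not available in a standalone Jython interpreter.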

Related

How do I get the specific value from configuration in IIS 8.0 (and above) using appcmd?

I am using IIS 8.5 and I need a getter for a specific property from the configuration.
For example, to set the connectionTimeout property I use the following syntax:
appcmd.exe set config -section:system.applicationHost/sites /siteDefaults.limits.connectionTimeout:"00:04:00" /commit:apphost
but when I try to read the property back with the following command:
appcmd.exe list config -section:system.applicationHost/sites /siteDefaults.limits.connectionTimeout
I get the following error:
ERROR ( message:The attribute "siteDefaults.limits.connectionTimeout" is not supported in the current command usage. )
From what I have tried so far, it seems that the list config command can only show me the section level and nothing deeper.
Is there any other way to get a specific property using appcmd?
If PowerShell is an acceptable alternative for you, you can use the WebAdministration PowerShell module to get the timeout. Here is a simple script:
#import WebAdministration to access IIS configuration
Import-Module WebAdministration
#declare variables containing your filters
$pspath = "MACHINE/WEBROOT/APPHOST"
$filter = "system.applicationHost/sites/siteDefaults/limits"
$name = "connectionTimeout"
#Read timeout from IIS
$res = Get-WebConfigurationProperty -name $name -filter $filter -pspath $pspath
#Format & write output
Write-Host "Timeout (in seconds): " $res.Value.TotalSeconds -ForegroundColor Yellow
You probably need to run it as admin.
You can get this using /text:
appcmd.exe list config -section:system.applicationHost/sites /text:siteDefaults.limits.connectionTimeout
If you want to see all possible options for /text:, use the following command:
appcmd.exe list config -section:system.applicationHost/sites /text:?

Rsyslog imfile error: no file name given

I am using rsyslog version 8.16.0 on ubuntu 16.04.
Following is my configuration file :
module(load="imfile") #needs to be done just once
# File 1
input(type="imfile"
mode="inotify"
File="/var/log/application/hello.log"
Tag="application-access-logs"
Severity="info"
PersistStateInterval="20000"
)
$PrivDropToGroup adm
$WorkDirectory /var/spool/rsyslog
$InputRunFileMonitor
#Template for application access events
$template ApplicationLogs,"%HOSTNAME% %msg%\n"
if $programname == 'application-access-logs' then @@xx.xx.xx.xx:12345;ApplicationLogs
if $programname == 'application-access-logs' then ~
I am getting the following error :
rsyslogd: version 8.16.0, config validation run (level 1), master config /etc/rsyslog.conf
rsyslogd: error during parsing file /etc/rsyslog.d/21-application-test.conf, on or before line 10: parameter 'mode' not known -- typo in config file? [v8.16.0 try http://www.rsyslog.com/e/2207 ]
rsyslogd: imfile error: no file name given, file monitor can not be created [v8.16.0 try http://www.rsyslog.com/e/2046 ]
What am I doing wrong here?
I am using inotify mode because I want to use wildcards in file name.
I know this is not a complete answer, but here is an observation that may help others struggling with the same issue.
module(load="imfile" mode="inotify") is a global setting, so if any other .conf file also loads the module with this property set, rsyslog seems to throw this error for all files parsed after it.
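Based on that error, mode is a parameter of the module() load statement in rsyslog 8.x, not of the input() statement. A corrected fragment (a sketch, not tested against 8.16.0 specifically) would be:

```
# Load imfile once, globally, and select inotify mode here
module(load="imfile" mode="inotify")

input(type="imfile"
      File="/var/log/application/hello.log"
      Tag="application-access-logs"
      Severity="info"
      PersistStateInterval="20000"
)
```

Wildcards in File= should still work, since inotify mode is selected at the module level.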

Error 'Packet for query is too large' when I tried to make a query on my website

Again I need your help.
I'm trying to put my Java web site online.
What I use:
MySQL server: the command mysql -V returns: mysql Ver 15.1 Distrib 10.1.23-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
Cayenne
Debian server
Java (Vaadin)
The error:
Packet for query is too large (4739923 > 1048576). You can change this value on the server by setting the 'max_allowed_packet' variable.
What I tried :
1. As the error message suggests, I tried to change the value on the server by doing the following:
Log on to my server
Connect to MySQL with: mysql -u root
Enter: SET GLOBAL max_allowed_packet=1073741824;
Then restart the server with: /etc/init.d/mysql restart
But I still get the error.
2. I took a look at: How to change max_allowed_packet size
But when I opened nano /etc/mysql/my.cnf, the file looked like this (I don't have any [mysql] section):
# The MariaDB configuration file
#
# The MariaDB/MySQL tools read configuration files in the following order:
# 1. "/etc/mysql/mariadb.cnf" (this file) to set global defaults,
# 2. "/etc/mysql/conf.d/*.cnf" to set global options.
# 3. "/etc/mysql/mariadb.conf.d/*.cnf" to set MariaDB-only options.
# 4. "~/.my.cnf" to set user-specific options.
#
# If the same option is defined multiple times, the last one will apply.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# This group is read both by the client and the server
# use it for options that affect everything
#
[client-server]
# Import all .cnf files from configuration directory
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mariadb.conf.d/
(Screenshot of the /etc/mysql directory contents omitted.)
Any hint will be much appreciated!
Thanks
EDIT: In /etc/mysql/mariadb.conf.d/50-server.cnf, I changed:
max_allowed_packet = 1073741824
max_connections = 100000
and I added: net_buffer_length = 1048576
For info: in MySQL Workbench I can see the server variables (screenshot omitted).
EDIT 2: Now, when I select the variable on the command line on the server, I get:
MariaDB [(none)]> SELECT @@global.max_allowed_packet;
+-----------------------------+
| @@global.max_allowed_packet |
+-----------------------------+
|                  1073741824 |
+-----------------------------+
1 row in set (0.00 sec)
SOLUTION (the error was not explicit):
Thanks to com.mysql.jdbc.PacketTooBigException, I found the real cause: the JDBC URL pointed at port 22 (SSH) instead of the MySQL port.
My Cayenne configuration was:
<url value="jdbc:mysql://IPADDRESS:22/DBBASENAME" />
<login userName="ServerUserName" password="ServerPassword" />
But it should be :
<url value="jdbc:mysql://IPADDRESS/DBBASENAME" />
<login userName="MYSQLUserName" password="MYSQLPassword" />
Change it in my.cnf, then restart mysqld.
Better yet, put it in a file under /etc/mysql/mariadb.conf.d/ and specify the section:
[mysqld]
max_allowed_packet = 1073741824
What you did with SET went away when you restarted. And even before the restart, it only applied to connections that logged in after you ran the SET.
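As a sketch (the file name 99-packet.cnf is an arbitrary choice, not a standard one), the persistent setting would look like:

```
# /etc/mysql/mariadb.conf.d/99-packet.cnf
[mysqld]
max_allowed_packet = 1073741824
```

After restarting (systemctl restart mariadb or /etc/init.d/mysql restart), verify with SELECT @@global.max_allowed_packet; and check that it reports 1073741824.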

How to load data from local machine to hdfs using flume

I am new to Flume, so please tell me: how can I store log files from my local machine into HDFS using Flume?
I also have issues setting the classpath and the flume.conf file.
Thank you,
Ajay
agent.sources = weblog
agent.channels = memoryChannel
agent.sinks = mycluster

## Sources #########################################################
agent.sources.weblog.type = exec
agent.sources.weblog.command = tail -F REPLACE-WITH-PATH2-your.log-FILE
agent.sources.weblog.batchSize = 1
agent.sources.weblog.channels = REPLACE-WITH-CHANNEL-NAME

## Channels ########################################################
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
agent.channels.memoryChannel.transactionCapacity = 100

## Sinks ###########################################################
agent.sinks.mycluster.type = REPLACE-WITH-CLUSTER-TYPE
agent.sinks.mycluster.hdfs.path = /user/root/flumedata
agent.sinks.mycluster.channel = REPLACE-WITH-CHANNEL-NAME

Save this file as logagent.conf and run it with the command below:
# flume-ng agent -n agent -f logagent.conf &
We need more information to know why things aren't working for you.
The short answer is that you need a Source to read your data from (maybe the spooling directory source), a Channel (memory channel if you don't need reliable storage) and the HDFS sink.
Update
The OP reports receiving the error message, "you must include conf file in flume class path".
You need to provide the conf file as an argument. You do so with the --conf-file parameter. For example, the command line I use in development is:
bin/flume-ng agent --conf-file /etc/flume-ng/conf/flume.conf --name castellan-indexer --conf /etc/flume-ng/conf
The error message reads that way because the bin/flume-ng script adds the contents of the --conf-file argument to the classpath before running Flume.
If data is being appended to your local file, you can use an exec source with the "tail -F" command. If the file is static, use the cat command to transfer the data to Hadoop.
The overall architecture would be:
Source: Exec source reading data from your file
Channel : Either memory channel or file channel
Sink: Hdfs sink where data is being dumped.
Use the user guide to create your conf file (https://flume.apache.org/FlumeUserGuide.html).
Once your conf file is ready, you can run it like this:
bin/flume-ng agent -n $agent_name -c conf -f conf/your-flume-conf.conf
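Putting those three pieces together, a minimal conf file might look like this (the agent, source, channel, and sink names and the HDFS path are illustrative assumptions, not taken from the question):

```
# exec source -> memory channel -> HDFS sink (illustrative names)
agent1.sources = tailsrc
agent1.channels = memch
agent1.sinks = hdfssink

agent1.sources.tailsrc.type = exec
agent1.sources.tailsrc.command = tail -F /var/log/app/app.log
agent1.sources.tailsrc.channels = memch

agent1.channels.memch.type = memory
agent1.channels.memch.capacity = 1000
agent1.channels.memch.transactionCapacity = 100

agent1.sinks.hdfssink.type = hdfs
agent1.sinks.hdfssink.hdfs.path = hdfs://namenode:8020/user/flume/logs
agent1.sinks.hdfssink.hdfs.fileType = DataStream
agent1.sinks.hdfssink.channel = memch
```

Saved as conf/agent1.conf, it would then be started with: bin/flume-ng agent -n agent1 -c conf -f conf/agent1.conf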

Where is the Postgresql config file: 'postgresql.conf' on Windows?

I'm receiving this message but I can't find the postgresql.conf file:
OperationalError: could not connect to server: Connection refused (0x0000274D/10061)
Is the server running on host "???" and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused (0x0000274D/10061)
Is the server running on host "???" and accepting
TCP/IP connections on port 5432?
On my machine:
C:\Program Files\PostgreSQL\8.4\data\postgresql.conf
postgresql.conf is located in PostgreSQL's data directory. The data directory is configured during setup, and the setting is saved as the PGDATA entry in c:\Program Files\PostgreSQL\<version>\pg_env.bat, for example:
@ECHO OFF
REM The script sets environment variables helpful for PostgreSQL
@SET PATH="C:\Program Files\PostgreSQL\<version>\bin";%PATH%
@SET PGDATA=D:\PostgreSQL\<version>\data
@SET PGDATABASE=postgres
@SET PGUSER=postgres
@SET PGPORT=5432
@SET PGLOCALEDIR=C:\Program Files\PostgreSQL\<version>\share\locale
Alternatively you can query your database with SHOW config_file; if you are a superuser.
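For example, from psql as a superuser (the paths in the comments are illustrative):

```sql
SHOW config_file;     -- e.g. C:/Program Files/PostgreSQL/13/data/postgresql.conf
SHOW hba_file;        -- location of pg_hba.conf
SHOW data_directory;  -- the data directory itself
```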
You can find it by following this path:
C:\Program Files\PostgreSQL\13\data
On my machine:
C:\Program Files (x86)\OpenERP 6.1-20121026-233219\PostgreSQL\data
You will find the postgresql.conf file in:
C:\Program Files\PostgreSQL\14\data
where you will also find pg_hba.conf, which controls username/password authentication.
PGDATA is used as ConfigDir by PostgreSQL; this holds under Docker as well as a normal installation, and it is the default until changed explicitly.
On my Docker container, PGDATA is configured as "/var/lib/postgresql/data", so all configuration files can be found under that directory.
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------

# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.

#data_directory = 'ConfigDir'           # use data in another directory
                                        # (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf'     # host-based authentication file
                                        # (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
                                        # (change requires restart)

# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = ''                 # write an extra PID file
                                        # (change requires restart)
