Issue with nxlog - logs arrive tagged as USER.NOTICE - rsyslog

First question here... I'm struggling with nxlog, which behaves strangely: some logs don't match my rsyslog fromhost-ip filtering rules while others work fine with the exact same config file, and they end up in the user.log file...
From what I have seen with tcpdump, it seems that the problematic logs already arrive carrying a USER.NOTICE tag:
Even stranger, from the same machine, some logs arrive with the tag and some without...
Can you give me some pointers to troubleshoot this?
Here is the nxlog.conf:
Panic Soft
#NoFreeOnExit TRUE
define ROOT C:\Program Files\nxlog
define CERTDIR %ROOT%\cert
define CONFDIR %ROOT%\conf\nxlog.d
define LOGDIR %ROOT%\data
define LOGFILE %LOGDIR%\nxlog.log
LogFile %LOGFILE%
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
<Extension _syslog>
Module xm_syslog
</Extension>
<Extension _charconv>
Module xm_charconv
AutodetectCharsets iso8859-2, utf-8, utf-16, utf-32
</Extension>
<Extension _exec>
Module xm_exec
</Extension>
<Extension _fileop>
Module xm_fileop
# Check the size of our log file hourly, rotate if larger than 5MB
<Schedule>
Every 1 hour
Exec if (file_exists('%LOGFILE%') and \
(file_size('%LOGFILE%') >= 5M)) \
file_cycle('%LOGFILE%', 8);
</Schedule>
# Rotate our log file every week on Sunday at midnight
<Schedule>
When #weekly
Exec if file_exists('%LOGFILE%') file_cycle('%LOGFILE%', 8);
</Schedule>
</Extension>
# Snare compatible example configuration
# Collecting event log
<Input in>
Module im_msvistalog
</Input>
#
# Converting events to Snare format and sending them out over UDP syslog
<Output out>
Module om_udp
Host SYSLOG_IP
Port 514
Exec to_syslog_snare();
</Output>
# Connect input 'in' to output 'out'
<Route 1>
Path in => out
</Route>
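For reference, a fromhost-ip filter in rsyslog typically looks like the sketch below (the IP address and target file here are placeholders, not my real values). Checking the raw packets with tcpdump shows whether each message actually carries a syslog <PRI> header; as far as I understand, messages without a parsable <PRI> get the default priority 13, which is exactly user.notice:
# rsyslog sketch (placeholder values): route everything from the Windows host to its own file
if $fromhost-ip == '192.0.2.10' then /var/log/windows.log
& stop
# inspect the raw UDP syslog traffic and check for a leading <PRI> such as <13> or <14>
tcpdump -nA udp port 514 and host 192.0.2.10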
Thanks in advance!

Related

How to force Spring to not zip log files, and instead append the date to the end of the log file

Currently I'm using Spring Boot logging and I'm configuring it through a property file; below are the sample logging properties:
spring.main.banner-mode=off
logging.level.root= INFO,ERROR,DEBUG
logging.level.org.springframework.web= ERROR
logging.level.com.concretepage= DEBUG
logging.pattern.console=
logging.file = D://logTest.log
logging.file.max-size=100MB
spring.output.ansi.enabled=ALWAYS
The problem is that the log file backups are compressed in .gz format,
like logTest.log.2019-06-14.0.gz.
How do I disable the default zipping?
I don't want to hard-wire the configuration in an XML file and put it inside the resources folder.
I could put the rolling appender configuration in an XML file, but I want to keep the log file path in the property file, so I can set it dynamically for different environments.
Is there any way to achieve this configuration?
As an alternative to @simon-martinelli's correct answer, if you do not wish to use a custom logback-spring.xml file, the Spring configuration parameter logging.pattern.rolling-file-name can be set in your application.properties or application.yml file.
For example, to disable the compression, remove the .gz suffix from the default file name pattern (${LOG_FILE}.%d{yyyy-MM-dd}.%i.gz, as per https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-custom-log-configuration).
This would require the addition of the following element to the application.yml file:
logging:
  pattern:
    rolling-file-name: "${LOG_FILE}.%d{yyyy-MM-dd}.%i"
Or if you are using application.properties, it would be:
logging.pattern.rolling-file-name = ${LOG_FILE}.%d{yyyy-MM-dd}.%i
Create a logback-spring.xml file in src/main/resources with this content:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<include resource="org/springframework/boot/logging/logback/base.xml"/>
<appender name="FILE"
class="ch.qos.logback.core.rolling.RollingFileAppender">
<encoder>
<pattern>${FILE_LOG_PATTERN}</pattern>
</encoder>
<file>${LOG_FILE}</file>
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<cleanHistoryOnStart>${LOG_FILE_CLEAN_HISTORY_ON_START:-false}</cleanHistoryOnStart>
<fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<maxFileSize>${LOG_FILE_MAX_SIZE:-10MB}</maxFileSize>
<maxHistory>${LOG_FILE_MAX_HISTORY:-7}</maxHistory>
<totalSizeCap>${LOG_FILE_TOTAL_SIZE_CAP:-0}</totalSizeCap>
</rollingPolicy>
</appender>
</configuration>
If the fileNamePattern doesn't end with .gz (or another compression suffix), Logback will not compress the rotated files.
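To keep the log file path in the property file (as the question asks) while still using a custom logback-spring.xml, the XML above can keep referencing ${LOG_FILE}, because Spring Boot exposes the configured log file path to Logback under that name. A minimal sketch, assuming Spring Boot 2.x property names (older versions use logging.file instead):
# application.properties: Boot passes this path to Logback as LOG_FILE
logging.file.name=D:/logTest.log
logging.file.max-size=100MB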

How to boot up from CDROM(ISO image) to install the guest OS using virsh

UPDATE on Oct. 4th, 2017: See my answer below. The credit goes to DanielB as I wouldn't have solved the problem without Daniel's help, so I'll accept his answer instead of my own.
I'm a novice in libvirt as well as system administration so excuse me if I'm asking stupid questions, though I've tried to do as much homework as possible beforehand.
My question is: How to boot up from CDROM to install the guest OS right after creating a VM using virsh?
I'm working on Ubuntu Desktop 14.04, virsh 1.2.2.
When I used 'virt-install' and passed the ISO file path as its '--cdrom' argument, I could successfully bring up the virt-viewer window which allowed me to go through the guest OS installation.
As I know I can also create a VM using an XML definition, I dumped the XML definition of the VM which I created using 'virt-install'. I then expected that the 'virt-viewer' window would be brought up automatically when I started the VM so I could install the guest OS. But it didn't.
Below is the XML definition of my VM.
If I enable the loader line, which I marked as "suspicious" below, I get the error "error: internal error: cannot load AppArmor profile 'libvirt-1092d51d-3b66-46a2-bf9b-71e13dc91799'". I added that line because I was trying the example given in libvirt's documentation here.
However, if I disable the "loader" line and run virsh create def_domain_test.xml, the domain is created successfully and shown as 'running', but the virt-viewer window is not brought up, so I can't install the guest OS on the VM.
Could anyone help me with that? I don't understand why 'virt-install' can bring up virt-viewer but my XML definition can't. I probably misconfigured the domain XML definition, but I couldn't figure out which specific part is wrong, even though I've tried to read as much documentation as possible.
Feel free to ask for more details if needed.
<!-- Let's call this file 'def_domain_test.xml' -->
<domain type='kvm'>
<name>vm_c2</name>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<vcpu placement='static'>1</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-trusty'>hvm</type>
<!-- Next line is suspicious! -->
<!-- <loader readonly='yes' secure='no' type='rom'>/usr/lib/xen-4.4/boot/hvmloader</loader> -->
<boot dev='cdrom'/>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/bin/kvm-spice</emulator>
<!-- Here is the hard drive that doesn't have OS installed. -->
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/home/me/me/testing/vm/pool/mvs_vol_c2'/>
<target dev='hda' bus='ide'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<!-- Here is the Ubuntu ISO. -->
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/me/me/testing/vm/ubuntu-14.04.5-server-amd64.iso'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<alias name='ide0-1-0'/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
<controller type='usb' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='ide' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<interface type='network'>
<source network='default'/>
<model type='rtl8139'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes'/>
<video>
<model type='cirrus' vram='9216' heads='1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</memballoon>
</devices>
</domain>
virsh is a very low-level tool whose commands map directly to individual libvirt API calls. An installation done by virt-install will make many API calls to accomplish its job, so just taking the final XML of the installed guest and passing it to virsh define is not equivalent.
For a start, virt-install will usually change the XML - it first creates a transient guest with an XML doc suitable for booting off the CDROM, and after that completes it'll change the XML to boot off the disk instead. virt-install also manually launches virt-viewer to display the console, which is not something virsh does.
That particular <loader> line should never be used with KVM - it is only relevant for Xen - by using that you've told KVM to run Xen paravirt code as its BIOS instead of SeaBIOS - this will certainly crash & burn.
If you use the '--debug' arg to virt-install you'll see details of what it does at each step. You could also set LIBVIRT_LOG_FILTERS=1:libvirt and LIBVIRT_LOG_OUTPUTS=1:stderr if you want to see details of every libvirt API call made.
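For example (a sketch only; keep whatever virt-install options you originally used), the extra debugging can be enabled like this:
# log every libvirt API call made by the client, as described above
export LIBVIRT_LOG_FILTERS=1:libvirt
export LIBVIRT_LOG_OUTPUTS=1:stderr
# re-run the original installation with step-by-step debug output
virt-install --debug ...your original options...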
Thanks for DanielB's help! The '--debug' argument of virt-install does reveal the information I need to solve this problem.
First of all, in the XML definition, I don't need the <loader> line. The <os> section should be:
<os>
<type arch='x86_64' machine='pc-i440fx-trusty'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
</os>
The two <boot> tags already specify the order of booting.
Secondly, virt-install's debug output suggests the desired way to bring up virt-viewer:
Run: virt-viewer --connect=qemu:///system --wait vm_c2
Optionally, you can add '--debug' and '--verbose' to virt-viewer to see more output.
At this moment, the viewer window should be brought up and a message is shown: Waiting for guest domain to be created.
Run: virsh create --file def_domain_test.xml.
The viewer would now be activated and display the output of the VM.
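Putting the two steps together (the extra virt-viewer flags are optional):
# terminal 1: start the viewer first and let it wait for the domain to appear
virt-viewer --connect=qemu:///system --wait --debug --verbose vm_c2
# terminal 2: create the transient domain from the XML definition
virsh create --file def_domain_test.xml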
Here is one caveat that caused me to get stuck at the beginning: you don't have to start virt-viewer prior to the VM. However, if you start the VM first and then the viewer, the viewer screen may be completely black, which may make you think nothing is happening. In this case, click the viewer window to give it input focus, then press the 'Enter' key; it should refresh and show what is actually there. (Resizing the window doesn't force it to refresh.)
FYI: If you output the virt-viewer's debug messages, you may see a message like this:
(virt-viewer:6296): virt-viewer-DEBUG: Error operation forbidden: read only access prevents virDomainOpenGraphics
This doesn't seem to cause me any problems, but it may hint at other issues if virt-viewer doesn't work correctly for you.

What are the steps used to generate the Erlangen map?

I have imported a .osm file from QGIS and then used sumo-0.22.0 to generate the .net.xml, .poly.xml and .rou.xml files, since I use Veins-4a2. When I simulated the Veins scenario, the application layer of the RSU did not execute, so I need to understand how the Erlangen files were created, because the problem is in my scenario (my files).
Can you please tell me what steps were used to generate the .net.xml, .poly.xml and .rou.xml files?
You can always have a look at the XML files with a text editor or even a simple pager. Almost all SUMO tools write a header to their output files which records the version, the options and the date of creation. In the case of the Erlangen network this says:
<!-- generated on Wed Nov 30 12:18:33 2011 by SUMO netconvert Version 0.13.1
<?xml version="1.0" encoding="iso-8859-1"?>
<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://sumo.sf.net/xsd/netconvertConfiguration.xsd">
<input>
<type-files value="erlangen.edgetypes.xml"/>
<osm-files value="erlangen.osm"/>
</input>
<output>
<output-file value="erlangen.net.xml"/>
</output>
<projection>
<proj.utm value="true"/>
</projection>
<edge_removal>
<remove-edges.isolated value="true"/>
</edge_removal>
<processing>
<osm.discard-tls value="true"/>
<no-turnarounds value="false"/>
<offset.disable-normalization value="true"/>
<roundabouts.guess value="true"/>
<junctions.join value="true"/>
</processing>
</configuration>
-->
which (hopefully) mentions all the information asked for. The route file has no such header, so I suppose it is handmade.
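In other words, the network was built by feeding the OSM extract and an edge-type file to netconvert with the options listed in that header. A rough command-line equivalent (a sketch; the option names are taken from the header above, and spellings may differ slightly between the netconvert 0.13.1 used there and newer SUMO releases):
netconvert --type-files erlangen.edgetypes.xml --osm-files erlangen.osm \
    --output-file erlangen.net.xml \
    --proj.utm true --remove-edges.isolated true \
    --osm.discard-tls true --no-turnarounds false \
    --offset.disable-normalization true \
    --roundabouts.guess true --junctions.join true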

Apache Falcon: Setting up a data pipeline in an actual cluster [Failed to load data, Error: 400 Bad Request]

I am trying to implement the data pipeline example by Hortonworks in an actual cluster. I have HDP 2.2 installed in my cluster, but I am getting the following error in the UI for the Processes and Datasets tabs:
Failed to load data. Error: 400 Bad Request
I have all services running except for HBase, Kafka, Knox, Ranger, Slider and Spark.
I have read the Falcon entity specification that describes the individual tags for the cluster, feed and process definitions, and modified the XML configuration files for the feeds and processes as follows.
Cluster Definition
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cluster name="primaryCluster" description="Analytics1" colo="Bangalore" xmlns="uri:falcon:cluster:0.1">
<interfaces>
<interface type="readonly" endpoint="hftp://node3.com.analytics:50070" version="2.6.0"/>
<interface type="write" endpoint="hdfs://node3.com.analytics:8020" version="2.6.0"/>
<interface type="execute" endpoint="node1.com.analytics:8050" version="2.6.0"/>
<interface type="workflow" endpoint="http://node1.com.analytics:11000/oozie/" version="4.1.0"/>
<interface type="messaging" endpoint="tcp://node1.com.analytics:61616?daemon=true" version="5.1.6"/>
</interfaces>
<locations>
<location name="staging" path="/user/falcon/primaryCluster/staging"/>
<location name="working" path="/user/falcon/primaryCluster/working"/>
</locations>
<ACL owner="falcon" group="hadoop"/>
</cluster>
Feed Definitions
RawEmailFeed
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<feed name="rawEmailFeed" description="Raw customer email feed" xmlns="uri:falcon:feed:0.1">
<tags>externalSystem=USWestEmailServers,classification=secure</tags>
<groups>churnAnalysisDataPipeline</groups>
<frequency>hours(1)</frequency>
<timezone>UTC</timezone>
<late-arrival cut-off="hours(4)"/>
<clusters>
<cluster name="primaryCluster" type="source">
<validity start="2014-02-28T00:00Z" end="2016-03-31T00:00Z"/>
<retention limit="days(3)" action="delete"/>
</cluster>
</clusters>
<locations>
<location type="data" path="/user/falcon/input/enron/${YEAR}-${MONTH}-${DAY}-${HOUR}"/>
<location type="stats" path="/none"/>
<location type="meta" path="/none"/>
</locations>
<ACL owner="falcon" group="users" permission="0755"/>
<schema location="/none" provider="none"/>
</feed>
cleansedEmailFeed
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<feed name="cleansedEmailFeed" description="Cleansed customer emails" xmlns="uri:falcon:feed:0.1">
<tags>owner=USMarketing,classification=Secure,externalSource=USProdEmailServers,externalTarget=BITools</tags>
<groups>churnAnalysisDataPipeline</groups>
<frequency>hours(1)</frequency>
<timezone>UTC</timezone>
<clusters>
<cluster name="primaryCluster" type="source">
<validity start="2014-02-28T00:00Z" end="2016-03-31T00:00Z"/>
<retention limit="days(10)" action="delete"/>
</cluster>
</clusters>
<locations>
<location type="data" path="/user/falcon/processed/enron/${YEAR}-${MONTH}-${DAY}-${HOUR}"/>
</locations>
<ACL owner="falcon" group="users" permission="0755"/>
<schema location="/none" provider="none"/>
</feed>
Process Definitions
rawEmailIngestProcess
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<process name="rawEmailIngestProcess" xmlns="uri:falcon:process:0.1">
<tags>pipeline=churnAnalysisDataPipeline,owner=ETLGroup,externalSystem=USWestEmailServers</tags>
<clusters>
<cluster name="primaryCluster">
<validity start="2014-02-28T00:00Z" end="2016-03-31T00:00Z"/>
</cluster>
</clusters>
<parallel>1</parallel>
<order>FIFO</order>
<frequency>hours(1)</frequency>
<timezone>UTC</timezone>
<outputs>
<output name="output" feed="rawEmailFeed" instance="now(0,0)"/>
</outputs>
<workflow name="emailIngestWorkflow" version="2.0.0" engine="oozie" path="/user/falcon/apps/ingest/fs"/>
<retry policy="periodic" delay="minutes(15)" attempts="3"/>
<ACL owner="falcon" group="hadoop"/>
</process>
cleanseEmailProcess
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<process name="cleanseEmailProcess" xmlns="uri:falcon:process:0.1">
<tags>pipeline=churnAnalysisDataPipeline,owner=ETLGroup</tags>
<clusters>
<cluster name="primaryCluster">
<validity start="2014-02-28T00:00Z" end="2016-03-31T00:00Z"/>
</cluster>
</clusters>
<parallel>1</parallel>
<order>FIFO</order>
<frequency>hours(1)</frequency>
<timezone>UTC</timezone>
<inputs>
<input name="input" feed="rawEmailFeed" start="now(0,0)" end="now(0,0)"/>
</inputs>
<outputs>
<output name="output" feed="cleansedEmailFeed" instance="now(0,0)"/>
</outputs>
<workflow name="emailCleanseWorkflow" version="5.0" engine="pig" path="/user/falcon/apps/pig/id.pig"/>
<retry policy="periodic" delay="minutes(15)" attempts="3"/>
<ACL owner="falcon" group="hadoop"/>
</process>
I have not made any changes to the ingest.sh, workflow.xml and id.pig files. They are present in the HDFS locations /user/falcon/apps/ingest/fs (ingest.sh and workflow.xml) and /user/falcon/apps/pig (id.pig). Also, I was not sure if the hidden .DS_Store file was required, and hence did not include it in the aforementioned HDFS locations.
ingest.sh
#!/bin/bash
# curl -sS http://sandbox.hortonworks.com:15000/static/wiki-data.tar.gz | tar xz && hadoop fs -mkdir -p $1 && hadoop fs -put wiki-data/*.txt $1
curl -sS http://bailando.sims.berkeley.edu/enron/enron_with_categories.tar.gz | tar xz && hadoop fs -mkdir -p $1 && hadoop fs -put enron_with_categories/*/*.txt $1
workflow.xml
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<workflow-app xmlns="uri:oozie:workflow:0.4" name="shell-wf">
<start to="shell-node"/>
<action name="shell-node">
<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>ingest.sh</exec>
<argument>${feedInstancePaths}</argument>
<file>${wf:appPath()}/ingest.sh#ingest.sh</file>
<!-- <file>/tmp/ingest.sh#ingest.sh</file> -->
<!-- <capture-output/> -->
</shell>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
id.pig
A = load '$input' using PigStorage(',');
B = foreach A generate $0 as id;
store B into '$output' USING PigStorage();
I am not exactly sure how the process flow works in the HDP example and would really appreciate it if someone could clear that up.
Specifically, I do not understand where the $1 argument given to ingest.sh comes from. I believe it is the HDFS location where the incoming data is to be stored. I noticed that workflow.xml has the tag <argument>${feedInstancePaths}</argument>.
Where does feedInstancePaths get its value from? I guess I'm getting the error because the feed is not getting stored in the proper location, but it may be a different problem.
The user falcon also has 755 permissions on all HDFS directories under /user/falcon.
Any help and suggestions would be appreciated.
You are running your own cluster, but this tutorial needs the resource referenced in the shell script (ingest.sh):
curl -sS http://sandbox.hortonworks.com:15000/static/wiki-data.tar.gz
I guess your cluster is not reachable at sandbox.hortonworks.com, and furthermore you don't have the required resource wiki-data.tar.gz. This tutorial only works with the provided sandbox.
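Independently of the data source, a quick way to get a more descriptive message than the UI's generic 400 Bad Request is to submit the entities from the Falcon CLI. A sketch, assuming the entity XMLs above are saved locally under these placeholder file names:
# submit in dependency order: cluster first, then feeds, then processes
falcon entity -type cluster -file primaryCluster.xml -submit
falcon entity -type feed -file rawEmailFeed.xml -submit
falcon entity -type feed -file cleansedEmailFeed.xml -submit
falcon entity -type process -file rawEmailIngestProcess.xml -submit
falcon entity -type process -file cleanseEmailProcess.xml -submit
# schedule the processes once submission succeeds
falcon entity -type process -name rawEmailIngestProcess -schedule
falcon entity -type process -name cleanseEmailProcess -schedule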

External log with spring boot from an application.yml

I would like to have an external log for my application, but I haven't managed to do it. I have this logback.xml in the resources folder:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<include resource="org/springframework/boot/logging/logback/base.xml"/>
<logger name="org.springframework.web" level="DEBUG"/>
</configuration>
And I have an application.yml with this content:
# ========================
# LOGGING
# ========================
#logging:
file: /tmp/application.log
# Enable this to specify the path and the name of the log file. By default, it creates a
# file named spring.log in the temp directory.
In the /tmp folder the only file is spring.log, with all the logs from the application, but I need a file with a different name, at DEBUG level, in a different folder.
Can anybody help me?
Thank you!
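A minimal application.yml of the kind usually used for this looks like the sketch below: the logging: key must not be commented out and the child keys must sit indented under it (the path here is a placeholder, and the logging.file property shown is the older name; newer Spring Boot versions use logging.file.name instead):
logging:
  file: /var/log/myapp/application.log
  level:
    org.springframework.web: DEBUG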
