Should this Sysmon configuration be fine-tuned more? - windows

I am putting together a Sysmon configuration for my lab environment. The lab is used to build replica networks for troubleshooting problems and testing different software. This is the configuration file I have created to monitor it.
<Sysmon schemaversion="4.50">
<HashAlgorithms>md5,sha256</HashAlgorithms>
<EventFiltering>
<RuleGroup name="" groupRelation="or">
<!-- Process Creation -->
<ProcessCreate onmatch="include">
<CommandLine condition="contains"> </CommandLine>
</ProcessCreate>
<!-- Process Termination -->
<ProcessTerminate onmatch="include" />
<!-- File Creation -->
<FileCreateStreamHash onmatch="include">
<Image condition="is">*</Image>
</FileCreateStreamHash>
<!-- File Deletion -->
<FileDelete onmatch="include">
<TargetFilename condition="is">*</TargetFilename>
</FileDelete>
<!-- Network Connection -->
<NetworkConnect onmatch="include" />
<!-- Registry Changes -->
<RegistryEvent onmatch="include">
<TargetObject condition="contains">HKLM\System\CurrentControlSet\Services\Tcpip\Parameters\Interfaces</TargetObject>
<Details condition="contains">SetValueKey</Details>
</RegistryEvent>
<!-- Process Tampering Events -->
<ProcessTampering onmatch="include" />
<!-- Driver Loaded -->
<DriverLoad onmatch="include" />
<!-- Create Remote Thread -->
<CreateRemoteThread onmatch="include" />
<!-- Raw Access Read -->
<RawAccessRead onmatch="include" />
<!-- Pipe Created Event -->
<PipeEvent onmatch="include" />
<!-- WMI Event -->
<WmiEvent onmatch="include" />
<!-- DNS Events -->
<DnsQuery onmatch="include" />
<!-- File Creation Time -->
<FileCreateTime onmatch="include" />
<!-- Process Changed -->
<ProcessTampering onmatch="include" />
<!-- Monitoring Logs -->
<FileCreate onmatch="include">
<TargetFilename condition="contains">log</TargetFilename>
</FileCreate>
<!-- Monitoring Registry files -->
<RegistryEvent onmatch="include">
<TargetObject condition="contains">HKLM\System\CurrentControlSet\Services\Tcpip\Parameters\Interfaces</TargetObject>
<Details condition="contains">SetValueKey</Details>
</RegistryEvent>
<!-- Monitoring common malware hiding places -->
<FileCreate onmatch="include">
<TargetFilename condition="contains">AppData\Local\Temp</TargetFilename>
</FileCreate>
<FileCreate onmatch="include">
<TargetFilename condition="contains">ProgramData\Microsoft\Crypto\RSA\MachineKeys</TargetFilename>
</FileCreate>
</RuleGroup>
</EventFiltering>
</Sysmon>
Can any Microsoft wizards out there look it over for me, let me know whether it looks sufficient, and if not, point me in a direction for how I should fine-tune this config for my use case?
Thanks all!
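For what it's worth, the kind of fine-tuning usually meant here is exclude-based filtering, where Sysmon logs everything for an event type except the listed matches. A minimal, hypothetical sketch (the svchost.exe filter is only a placeholder, not a recommendation for this lab):
<!-- hypothetical sketch: log all network connections except those from the listed image -->
<NetworkConnect onmatch="exclude">
<Image condition="end with">\svchost.exe</Image>
</NetworkConnect>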

Related

ActiveMQ poor performance on large number (tens of millions) of messages

CentOS 7
4 cores, 16G RAM, 500GB SSD
ActiveMQ 5.15.4
OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.10+9)
ActiveMQ architecture
Machines 69 and 68 are built with network brokers. The activemq.xml of 68 and 69 are identical except for the relevant names such as broker names, hostname, IP, etc.
Topics in this architecture:
TOPIC_A_ALL_DELIVERED
TOPIC_A_ALL_RECEIVED
TOPIC_A_NN_DELIVERED, NN=00-23
TOPIC_A_NN_RECEIVED, NN=00-23
Persistence days: currently 2 days, but the expected target is 7 days. This is how long to wait before expiring a message if a consumer doesn't connect back to retrieve it. I use this config: <timeStampingBrokerPlugin ttlCeiling="172800000" zeroExpirationOverride="172800000"/> (see the sketch just below this list).
Persistent store: kahadb, limit size is 200GB
Message format is XML, and each message size is about 1K-3K bytes.
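For reference, a minimal sketch of the same plugin element with the stated 7-day target, assuming nothing else changes (7 days × 86,400,000 ms per day = 604,800,000 ms):
<!-- sketch: 7-day TTL (604,800,000 ms) instead of the current 2-day value -->
<timeStampingBrokerPlugin ttlCeiling="604800000" zeroExpirationOverride="604800000"/>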
Consumers
In addition to durably subscribing to TOPIC_A_ALL_XXXXXX, each consumer also durably subscribes to other topics based on its requirements, but I have no idea how fast the consumers can consume the data, nor do I know whether they actually subscribe to all the topics they need. I only know that sometimes consumers stop receiving data while their code is being debugged and then connect back later.
Producer
The producer is scheduled to run every 30 minutes. Whenever the producer runs, it puts data to only one MQ server; which server it targets depends on the failover connection protocol.
Each run the producer puts over 800K XML messages onto topics starting with TOPIC_A. Each XML contains a site_id tag (00-23) and a direction tag (DELIVERED or RECEIVED), so the producer routes each XML to its relevant topic based on those two tags. The producer also puts each XML into TOPIC_A_ALL_XXXXX based on the direction tag.
Based on the above (48 runs per day × 800K messages, with each message also copied to the ALL topic), the average total quantity of messages per day is about 76,800,000.
Symptom
At the beginning no messages are in KahaDB. The producer can push messages to a single MQ server over one connection at up to 600~700 msgs/sec. As the amount of stored data increases, the send rate slows down. It drops to 3 msgs/sec and sometimes gets stuck. Whenever this happens, the following can be observed:
From htop, one activemq process keeps consuming 100% of a CPU core (sometimes up to 200%)
Free memory is normal (at least 4~8GB)
Consumers receive nothing from either MQ server.
This happens almost every day.
If I just wait for 4-6 hours the MQ server comes back: the producer can send at 4xx~5xx msgs/sec, and consumers receive data again.
This cycle repeats every day. I really have no idea how to improve the situation. Any suggestions?
activemq.xml
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer" />
<!-- Allows accessing the server log -->
<bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
lazy-init="false" scope="singleton"
init-method="start" destroy-method="stop">
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="activemq-mdmcs02p-node1" dataDirectory="${activemq.data}" persistent="true" useJmx="true" populateJMSXUserID="true">
<destinations>
<topic physicalName="TOPIC_A.00.Delivered" />
<topic physicalName="TOPIC_A.00.Received" />
<topic physicalName="TOPIC_A.01.Delivered" />
<topic physicalName="TOPIC_A.01.Received" />
<topic physicalName="TOPIC_A.02.Delivered" />
<topic physicalName="TOPIC_A.02.Received" />
<topic physicalName="TOPIC_A.03.Delivered" />
<topic physicalName="TOPIC_A.03.Received" />
<topic physicalName="TOPIC_A.04.Delivered" />
<topic physicalName="TOPIC_A.04.Received" />
<topic physicalName="TOPIC_A.05.Delivered" />
<topic physicalName="TOPIC_A.05.Received" />
<topic physicalName="TOPIC_A.06.Delivered" />
<topic physicalName="TOPIC_A.06.Received" />
<topic physicalName="TOPIC_A.07.Delivered" />
<topic physicalName="TOPIC_A.07.Received" />
<topic physicalName="TOPIC_A.08.Delivered" />
<topic physicalName="TOPIC_A.08.Received" />
<topic physicalName="TOPIC_A.09.Delivered" />
<topic physicalName="TOPIC_A.09.Received" />
<topic physicalName="TOPIC_A.10.Delivered" />
<topic physicalName="TOPIC_A.10.Received" />
<topic physicalName="TOPIC_A.11.Delivered" />
<topic physicalName="TOPIC_A.11.Received" />
<topic physicalName="TOPIC_A.12.Delivered" />
<topic physicalName="TOPIC_A.12.Received" />
<topic physicalName="TOPIC_A.13.Delivered" />
<topic physicalName="TOPIC_A.13.Received" />
<topic physicalName="TOPIC_A.14.Delivered" />
<topic physicalName="TOPIC_A.14.Received" />
<topic physicalName="TOPIC_A.15.Delivered" />
<topic physicalName="TOPIC_A.15.Received" />
<topic physicalName="TOPIC_A.16.Delivered" />
<topic physicalName="TOPIC_A.16.Received" />
<topic physicalName="TOPIC_A.17.Delivered" />
<topic physicalName="TOPIC_A.17.Received" />
<topic physicalName="TOPIC_A.18.Delivered" />
<topic physicalName="TOPIC_A.18.Received" />
<topic physicalName="TOPIC_A.19.Delivered" />
<topic physicalName="TOPIC_A.19.Received" />
<topic physicalName="TOPIC_A.20.Delivered" />
<topic physicalName="TOPIC_A.20.Received" />
<topic physicalName="TOPIC_A.21.Delivered" />
<topic physicalName="TOPIC_A.21.Received" />
<topic physicalName="TOPIC_A.22.Delivered" />
<topic physicalName="TOPIC_A.22.Received" />
<topic physicalName="TOPIC_A.23.Delivered" />
<topic physicalName="TOPIC_A.23.Received" />
<topic physicalName="TOPIC_A_ALL.Delivered" />
<topic physicalName="TOPIC_A_ALL.Received" />
</destinations>
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="false" memoryLimit="4096mb" enableAudit="false" expireMessagesPeriod="60000" >
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<deadLetterStrategy>
<sharedDeadLetterStrategy processExpired="false"/>
</deadLetterStrategy>
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
<networkBridgeFilterFactory>
<conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"/>
</networkBridgeFilterFactory>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<!--
<destinationInterceptors>
<virtualDestinationInterceptor>
<virtualDestinations>
<virtualTopic name=">" prefix="TOPIC_A.*" selectorAware="false"/>
</virtualDestinations>
</virtualDestinationInterceptor>
</destinationInterceptors>
-->
<networkConnectors>
<networkConnector uri="static:(tcp://192.168.11.68:61616)" userName="admin" password="XXXXX" dynamicOnly="true" prefetchSize="1" />
</networkConnectors>
<!--
The managementContext is used to configure how ActiveMQ is exposed in
JMX. By default, ActiveMQ uses the MBean server that is started by
the JVM. For more information, see:
http://activemq.apache.org/jmx.html
-->
<managementContext>
<managementContext createConnector="true"/>
</managementContext>
<!--
Configure message persistence for the broker. The default persistence
mechanism is the KahaDB store (identified by the kahaDB tag).
For more information, see:
http://activemq.apache.org/persistence.html
-->
<persistenceAdapter>
<kahaDB directory="/home/activemq/kahadb"
ignoreMissingJournalfiles="true"
checkForCorruptJournalFiles="true"
checksumJournalFiles="true"
enableJournalDiskSyncs="false"
/>
</persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before disabling caching and/or slowing down producers. For more information, see:
http://activemq.apache.org/producer-flow-control.html
-->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="80 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="40 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!--
<sslContext>
<sslContext
keyStore="file:${activemq.base}conf/broker1.ks" keyStorePassword="P#ssw0rd"
/>
</sslContext>
-->
<!--
The transport connectors expose ActiveMQ over a given protocol to
clients and other brokers. For more information, see:
http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=1048576000"/>
<transportConnector name="nio" uri="nio://0.0.0.0:61617?trace=true"/>
<!-- <transportConnector name="ssl" uri="ssl://0.0.0.0:61618?trace=true&transport.enabledProtocols=TLSv1.2"/> -->
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600&transport.transformer=jms"/>
<!-- <transportConnector name="amqp+ssl" uri="amqp://0.0.0.0:5673?maximumConnections=1000&wireFormat.maxFrameSize=104857600&transport.enabledProtocols=TLSv1.2"/> -->
<!-- <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/> -->
<!-- <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/> -->
<!-- <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/> -->
</transportConnectors>
<!-- destroy the spring context on shutdown to stop jetty -->
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
<plugins>
<!-- 86,400,000 ms = 1 day -->
<timeStampingBrokerPlugin ttlCeiling="172800000" zeroExpirationOverride="172800000"/>
<jaasAuthenticationPlugin configuration="activemq" />
<authorizationPlugin>
<map>
<authorizationMap>
<authorizationEntries>
<authorizationEntry queue=">" read="admins" write="admins" admin="admins" />
<authorizationEntry topic=">" read="admins" write="admins" admin="admins" />
<authorizationEntry topic="TOPIC_A.>" read="mdmsusers" write="mdmsusers" />
<authorizationEntry topic="TOPIC_A.Delivered" read="pwsusers" />
<authorizationEntry topic="ActiveMQ.Advisory.>" read="mdmsusers,pwsusers" write="mdmsusers,pwsusers" admin="mdmsusers,pwsusers"/>
</authorizationEntries>
<tempDestinationAuthorizationEntry>
<tempDestinationAuthorizationEntry read="admins" write="admins" admin="admins"/>
</tempDestinationAuthorizationEntry>
</authorizationMap>
</map>
</authorizationPlugin>
</plugins>
</broker>
<!--
Enable web consoles, REST and Ajax APIs and demos
The web consoles requires by default login, you can disable this in the jetty.xml file
Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
-->
<import resource="jetty.xml"/>
</beans>
I suspect you are hitting some combination of flow control, a fast producer, slow consumers, the pending message limit, or an invalid client registration that needs to be sorted out. Perhaps you are even hitting a bug or missing optimization in that version of ActiveMQ.
Suggested steps:
Upgrade to the latest 5.15.x. There are a lot of fixes, and it does not make sense to troubleshoot against a build that old.
Enable advisory messages for advisoryForFastProducers and advisoryForSlowConsumers on the destinations. This gives you ActiveMQ.Advisory topics that show when those scenarios occur.
Investigate the clients and connections to make sure they are all properly registered with a clientId and subscriptionName to get a durable subscription. Remember: two connections cannot share the same clientId+subscriptionName.
Consider moving to Virtual Topics, where messages are sent to the topic and consumers read from queues. You get much more flexibility and visibility into what is going on with the flows, and as a bonus it makes multi-broker shared subscriptions straightforward (see the sketch after this list).
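As a rough sketch of the advisory and Virtual Topic suggestions above (the consumer queue name is a placeholder, not taken from your setup), the relevant pieces inside the <broker> element could look like this; the advisory attributes could also simply be added to your existing topic=">" policyEntry:
<!-- sketch only: publish advisory topics for fast producers and slow consumers -->
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" advisoryForFastProducers="true" advisoryForSlowConsumers="true"/>
</policyEntries>
</policyMap>
</destinationPolicy>
<!-- sketch only: Virtual Topics; producers keep publishing to the topics, while each
consumer reads from its own queue, e.g. Consumer.clientX.TOPIC_A.00.Delivered -->
<destinationInterceptors>
<virtualDestinationInterceptor>
<virtualDestinations>
<virtualTopic name=">" prefix="Consumer.*." selectorAware="false"/>
</virtualDestinations>
</virtualDestinationInterceptor>
</destinationInterceptors>
With Virtual Topics the durable-subscription bookkeeping goes away: each consumer's backlog is just a queue you can inspect, monitor, and purge.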

Websphere Liberty AdminCenter login does not work when custom JAAS context is defined

I am new to the WebSphere Liberty profile and am working with version 17.0.0.4. What I am trying to achieve is a custom JAAS login setup for the application. The application works fine on WebSphere 8.5.
I have reviewed many links from the IBM Knowledge Center, but every result covers either the AdminCenter login or the custom JAAS login, not both together. With WebSphere 8.5 we have a hierarchy of levels to decide which authentication mechanism goes where, but with Liberty, if I set up the custom JAAS authentication mechanism then I can't log in to the WebSphere Liberty AdminCenter, and if I configure server.xml for , then it does not authenticate my application's users (because the application is designed to authenticate its users via JAAS).
Here is my server.xml file.
<?xml version="1.0" encoding="UTF-8"?>
<server description="new server">
<!-- Enable features -->
<featureManager>
<feature>adminCenter-1.0</feature>
<feature>ssl-1.0</feature>
<feature>webProfile-7.0</feature>
<feature>appSecurity-2.0</feature>
</featureManager>
<keyStore id="defaultKeyStore" password="WASLiberty" />
<!-- Admin Center username/password -->
<!--<quickStartSecurity userName="admin" userPassword="admin123" />-->
<!-- Define an Administrator and non-Administrator -->
<basicRegistry id="basic">
<user name="admin" password="{xor}PjsyNjFubWw=" /> <!-- Encoded version of "admin123" -->
<user name="nonadmin" password="nonadmin123" />
</basicRegistry>
<!-- Assign 'admin' to Administrator -->
<administrator-role>
<user>admin</user>
</administrator-role>
<!-- JNDI Connection configuration -->
<dataSource id="MY_CUSTOM_DS" jndiName="jdbc/MY_CUSTOM_DS">
<jdbcDriver libraryRef="sqlserverjdbc"/>
<properties.microsoft.sqlserver databaseName="mydb"
serverName="localhost" portNumber="1433"
user="sa" password="root" />
</dataSource>
<!-- JDBC Driver file location -->
<library id="sqlserverjdbc">
<file name="${wlp.install.dir}/lib/sqljdbc4.jar"/>
</library>
<library id="MyLoginModuleLib">
<fileset dir="${wlp.install.dir}/lib" includes="custom_auth.jar"/>
</library>
<!-- JAAS Login Module for web application -->
<jaasLoginModule id="myCustom"
className="com.kana.auth.websphere.MyLoginModule"
controlFlag="REQUIRED" libraryRef="MyLoginModuleLib">
<options myOption1="value1" myOption2="value2"/>
</jaasLoginModule>
<!-- JAAS Login Context -->
<jaasLoginContextEntry id="system.WEB_INBOUND" name="system.WEB_INBOUND"
loginModuleRef="myCustom, hashtable, userNameAndPassword, certificate, token" />
<!-- To access this server from a remote client add a host attribute to the following element, e.g. host="*" -->
<httpEndpoint id="defaultHttpEndpoint"
host="*"
httpPort="9080"
httpsPort="9443" />
<webApplication contextRoot="mywebapp" location="mywebapp.war" />
<!-- Automatically expand WAR files and EAR files -->
<applicationManager autoExpand="true"/>
<!-- Enable remote file access -->
<remoteFileAccess>
<writeDir>${server.config.dir}</writeDir>
</remoteFileAccess>
</server>
Can anyone please point out where I am making a mistake?
If you are using a custom JAAS login module, then your custom JAAS login module needs to ignore the Admin Center authentication and let the default login modules handle it.
The better option is to use a custom TAI (Trust Association Interceptor) to handle the application authentication and let the default login modules handle the Admin Center authentication; a sketch follows below.
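For illustration only, a minimal server.xml sketch of the TAI approach, assuming a hypothetical interceptor class com.example.MyTAI packaged in my_tai.jar (both names are placeholders):
<library id="myTAILib">
<fileset dir="${wlp.install.dir}/lib" includes="my_tai.jar"/>
</library>
<!-- the TAI authenticates application requests; Admin Center keeps using the default login modules -->
<trustAssociation id="myTrustAssociation" invokeForUnprotectedURI="false" failOverToAppAuthType="false">
<interceptors id="myTAI" enabled="true" className="com.example.MyTAI"
invokeBeforeSSO="true" invokeAfterSSO="false" libraryRef="myTAILib">
<properties myOption1="value1"/>
</interceptors>
</trustAssociation>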
Regards,
Ut Le
Many thanks to Ut Le for providing the solution.
From the log file I found that the root cause was a ClassNotFoundException: the class I was pointing to from <jaasLoginModule> could not be found. It turned out I was pointing to a jar file that was not at that location, so it was a silly configuration mistake on my part.

Installing Icecast on OS X: Homebrew or MacPorts?

I've been trying to run an Icecast server for a while, without success.
I've installed it with Homebrew and MacPorts, but can't make it run either way.
When I enter icecast -c ~/.icecast.xml, Terminal doesn't return anything, and it stays like that indefinitely unless I suspend it (the shell reports [1]+ Stopped).
My config file icecast.xml was downloaded from this tutorial, though I'm using OS X Mavericks (10.9).
I haven't changed a single line of it. I'm stuck.
Any ideas?
<icecast>
<!-- location and admin are two arbitrary strings that are e.g. visible
on the server info page of the icecast web interface
(server_version.xsl). -->
<location>Earth</location>
<admin>admin#localhost</admin>
<limits>
<clients>100</clients>
<sources>2</sources>
<threadpool>5</threadpool>
<queue-size>524288</queue-size>
<client-timeout>30</client-timeout>
<header-timeout>15</header-timeout>
<source-timeout>10</source-timeout>
<!-- If enabled, this will provide a burst of data when a client
first connects, thereby significantly reducing the startup
time for listeners that do substantial buffering. However,
it also significantly increases latency between the source
client and listening client. For low-latency setups, you
might want to disable this. -->
<burst-on-connect>1</burst-on-connect>
<!-- same as burst-on-connect, but this allows for being more
specific on how much to burst. Most people won't need to
change from the default 64k. Applies to all mountpoints -->
<burst-size>65535</burst-size>
</limits>
<authentication>
<!-- Sources log in with username 'source' -->
<source-password>hackme</source-password>
<!-- Relays log in username 'relay' -->
<relay-password>hackme</relay-password>
<!-- Admin logs in with the username given below -->
<admin-user>admin</admin-user>
<admin-password>hackme</admin-password>
</authentication>
<!-- set the mountpoint for a shoutcast source to use, the default if not
specified is /stream but you can change it here if an alternative is
wanted or an extension is required
<shoutcast-mount>/live.nsv</shoutcast-mount>
-->
<!-- Uncomment this if you want directory listings -->
<!--
<directory>
<yp-url-timeout>15</yp-url-timeout>
<yp-url>http://dir.xiph.org/cgi-bin/yp-cgi</yp-url>
</directory>
-->
<!-- This is the hostname other people will use to connect to your server.
It affects mainly the urls generated by Icecast for playlists and yp
listings. -->
<hostname>stream.myhouse.com</hostname>
<!-- You may have multiple <listener> elements -->
<listen-socket>
<port>8000</port>
<!-- <bind-address>127.0.0.1</bind-address> -->
<!-- <shoutcast-mount>/stream</shoutcast-mount> -->
</listen-socket>
<!--
<listen-socket>
<port>8001</port>
</listen-socket>
-->
<!--<master-server>127.0.0.1</master-server>-->
<!--<master-server-port>8001</master-server-port>-->
<!--<master-update-interval>120</master-update-interval>-->
<!--<master-password>hackme</master-password>-->
<!-- setting this makes all relays on-demand unless overridden, this is
useful for master relays which do not have <relay> definitions here.
The default is 0 -->
<!--<relays-on-demand>1</relays-on-demand>-->
<!--
<relay>
<server>127.0.0.1</server>
<port>8001</port>
<mount>/example.ogg</mount>
<local-mount>/different.ogg</local-mount>
<on-demand>0</on-demand>
<relay-shoutcast-metadata>0</relay-shoutcast-metadata>
</relay>
-->
<!-- Only define a <mount> section if you want to use advanced options,
like alternative usernames or passwords
<mount>
<mount-name>/example-complex.ogg</mount-name>
<username>othersource</username>
<password>hackmemore</password>
<max-listeners>1</max-listeners>
<dump-file>/tmp/dump-example1.ogg</dump-file>
<burst-size>65536</burst-size>
<fallback-mount>/example2.ogg</fallback-mount>
<fallback-override>1</fallback-override>
<fallback-when-full>1</fallback-when-full>
<intro>/example_intro.ogg</intro>
<hidden>1</hidden>
<no-yp>1</no-yp>
<authentication type="htpasswd">
<option name="filename" value="myauth"/>
<option name="allow_duplicate_users" value="0"/>
</authentication>
<on-connect>/home/icecast/bin/stream-start</on-connect>
<on-disconnect>/home/icecast/bin/stream-stop</on-disconnect>
</mount>
<mount>
<mount-name>/auth_example.ogg</mount-name>
<authentication type="url">
<option name="mount_add"
value="http://myauthserver.net/notify_mount.php"/>
<option name="mount_remove"
value="http://myauthserver.net/notify_mount.php"/>
<option name="listener_add"
value="http://myauthserver.net/notify_listener.php"/>
<option name="listener_remove"
value="http://myauthserver.net/notify_listener.php"/>
</authentication>
</mount>
-->
<fileserve>1</fileserve>
<paths>
<!-- basedir is only used if chroot is enabled -->
<basedir>/usr/local/Cellar/icecast/2.3.3/share/icecast</basedir>
<!-- Note that if <chroot> is turned on below, these paths must both
be relative to the new root, not the original root -->
<logdir>/usr/local/Cellar/icecast/2.3.3/var/log/icecast</logdir>
<webroot>/usr/local/Cellar/icecast/2.3.3/share/icecast/web</webroot>
<adminroot>/usr/local/Cellar/icecast/2.3.3/share/icecast/admin</adminroot>
<!-- <pidfile>/usr/local/Cellar/icecast/2.3.3/share/icecast/icecast.pid</pidfile> -->
<!-- Aliases: treat requests for 'source' path as being for 'dest' path
May be made specific to a port or bound address using the "port"
and "bind-address" attributes.
-->
<!--
<alias source="/foo" destination="/bar"/>
-->
<!-- Aliases: can also be used for simple redirections as well,
this example will redirect all requests for http://server:port/ to
the status page
-->
<alias source="/" destination="/status.xsl"/>
</paths>
<logging>
<accesslog>access.log</accesslog>
<errorlog>error.log</errorlog>
<!-- <playlistlog>playlist.log</playlistlog> -->
<loglevel>3</loglevel> <!-- 4 Debug, 3 Info, 2 Warn, 1 Error -->
<logsize>10000</logsize> <!-- Max size of a logfile -->
<!-- If logarchive is enabled (1), then when logsize is reached
the logfile will be moved to [error|access|playlist].log.DATESTAMP,
otherwise it will be moved to [error|access|playlist].log.old.
Default is non-archive mode (i.e. overwrite)
-->
<!-- <logarchive>1</logarchive> -->
</logging>
<security>
<!-- <chroot>0</chroot> -->
<!--
<changeowner>
<user>nobody</user>
<group>nogroup</group>
</changeowner>
-->
</security>
</icecast>
If you install Icecast using Homebrew, the default configuration file is placed at /usr/local/etc/icecast.xml; just edit it as you need.
I did not read your config, but there may well be an error in it somewhere, so start from the example config that is shipped with Icecast.

Is there any .runsettings documentation?

I'm looking for documentation for .runsettings files as used with vstest. Is there an xsd?
I can't seem to find much except for a few example files in the msdn documentation.
Runsettings (VS2012) is similar to testsettings (VS2010), except that testsettings is specific to tests written for MSTest, whereas VS2012 supports settings for different adapters. As such, the schema isn't a closed system, so an XSD would not be comprehensive.
This MSDN article lists some high level details (https://msdn.microsoft.com/en-us/library/jj635153.aspx) for the elements in the runsettings file.
Here's a snippet from that article.
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
<!-- Configurations that affect the Test Framework -->
<RunConfiguration>
<MaxCpuCount>1</MaxCpuCount>
<!-- Path relative to solution directory -->
<ResultsDirectory>.\TestResults</ResultsDirectory>
<!-- [x86] | x64
- You can also change it from menu Test, Test Settings, Default
Processor Architecture -->
<TargetPlatform>x86</TargetPlatform>
<!-- Framework35 | [Framework40] | Framework45 -->
<TargetFrameworkVersion>Framework40</TargetFrameworkVersion>
<!-- Path to Test Adapters -->
<TestAdaptersPaths>%SystemDrive%\Temp\foo;%SystemDrive%\Temp\bar</TestAdaptersPaths>
</RunConfiguration>
<!-- Configurations for data collectors -->
<DataCollectionRunSettings>
<DataCollectors>
<DataCollector friendlyName="Code Coverage" uri="datacollector://Microsoft/CodeCoverage/2.0" assemblyQualifiedName="Microsoft.VisualStudio.Coverage.DynamicCoverageDataCollector, Microsoft.VisualStudio.TraceCollector, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
<Configuration>
<CodeCoverage>
<ModulePaths>
<Exclude>
<ModulePath>.*CPPUnitTestFramework.*</ModulePath>
</Exclude>
</ModulePaths>
</CodeCoverage>
</Configuration>
</DataCollector>
</DataCollectors>
</DataCollectionRunSettings>
<!-- Parameters used by tests at runtime -->
<TestRunParameters>
<Parameter name="webAppUrl" value="http://localhost" />
<Parameter name="webAppUserName" value="Admin" />
<Parameter name="webAppPassword" value="Password" />
</TestRunParameters>
<!-- Adapter Specific sections -->
<!-- MSTest adapter -->
<MSTest>
<MapInconclusiveToFailed>True</MapInconclusiveToFailed>
<CaptureTraceOutput>false</CaptureTraceOutput>
<DeleteDeploymentDirectoryAfterTestRunIsComplete>False</DeleteDeploymentDirectoryAfterTestRunIsComplete>
<DeploymentEnabled>False</DeploymentEnabled>
<AssemblyResolution>
<Directory path="D:\myfolder\bin\" includeSubDirectories="false"/>
</AssemblyResolution>
</MSTest>
</RunSettings>
I found further information on runsettings (specific to code coverage) here:
http://msdn.microsoft.com/en-us/library/jj159530%28v=vs.110%29.aspx

Running JMS Bridge with HornetQ

I have two standalone HornetQ servers on the same machine. I followed the jms-bridge example in the HornetQ examples for configuring the source server and target server (I copied the configurations from the example to my servers). When I try to run the target server (which contains the JMS bridge), it cannot find the TransactionManager property of the JMSBridge bean, because com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionManagerImple is not on the server classpath. What should I choose instead of this implementation of TransactionManager? Or what jar files are required for com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionManagerImple?
<!-- The JMS Bridge -->
<bean name="JMSBridge" class="org.hornetq.jms.bridge.impl.JMSBridgeImpl">
<constructor>
<!-- Source ConnectionFactory Factory -->
<parameter>
<inject bean="SourceCFF"/>
</parameter>
<!-- Target ConnectionFactory Factory -->
<parameter>
<inject bean="TargetCFF"/>
</parameter>
<!-- Source DestinationFactory -->
<parameter>
<inject bean="SourceDestinationFactory"/>
</parameter>
<!-- Target DestinationFactory -->
<parameter>
<inject bean="TargetDestinationFactory"/>
</parameter>
<!-- Source username (no username here) -->
<parameter><null /></parameter>
<!-- Source password (no password here)-->
<parameter><null /></parameter>
<!-- Target username (no username here)-->
<parameter><null /></parameter>
<!-- Target password (no password here)-->
<parameter><null /></parameter>
<!-- Selector -->
<parameter><null /></parameter>
<!-- Interval to retry in case of failure (in ms) -->
<parameter>5000</parameter>
<!-- Maximum number of retries to connect to the source and target -->
<parameter>10</parameter>
<!-- Quality of service -->
<parameter>ONCE_AND_ONLY_ONCE</parameter>
<!-- Maximum batch size -->
<parameter>1</parameter>
<!-- Maximum batch time (-1 means infinite) -->
<parameter>-1</parameter>
<!-- Subscription name (no subscription name here)-->
<parameter><null /></parameter>
<!-- client ID (no client ID here)-->
<parameter><null /></parameter>
<!-- concatenate JMS messageID to the target's message header -->
<parameter>true</parameter>
<!-- register the JMS Bridge in the JMX MBeanServer -->
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>org.hornetq:service=JMSBridge</parameter>
</constructor>
<property name="transactionManager">
<inject bean="TransactionManager"/>
</property>
<!-- HornetQ JMS Server must be started before the bridge -->
<depends>JMSServerManager</depends>
</bean>
<!-- TransactionManager -->
<bean name="TransactionManager" class="com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionManagerImple">
</bean>
You could check out the new HornetQ book:
HornetQ Messaging Developer's Guide
I added these jar files and the error went away:
jta.jar
narayana-jta.jar
