How to use a single Log4Net instance in multiple NUnit TestFixtures with SetUpFixture - events

I'm just getting into Selenium WebDriver and its EventFiringWebDriver so that I can log any exceptions thrown by the driver to a file, email, etc.
I have Log4Net working, and my unit tests are running fine with Selenium.
What I am having trouble with is getting Log4Net to create one log file shared across multiple test fixtures.
Here are some important classes which I think I need to show you in order to explain my issue.
public class EventLogger : EventFiringWebDriver
{
    // Not sure if this is the best place to declare Log4Net
    public static readonly ILog Log = LogManager.GetLogger(typeof(EventLogger));

    public EventLogger(IWebDriver parentDriver) : base(parentDriver)
    {
        // To get Log4Net to read the configuration file on which logger to use
        // (console, file, email, etc.)
        XmlConfigurator.Configure();
        if (Log.IsInfoEnabled)
        {
            Log.Info("Logger started.");
        }
    }

    protected override void OnFindingElement(FindElementEventArgs e)
    {
        base.OnFindingElement(e);
        //TODO:
        if (Log.IsInfoEnabled)
        {
            Log.InfoFormat("OnFindingElement: {0}", e);
        }
    }

    protected override void OnElementClicked(WebElementEventArgs e)
    {
        base.OnElementClicked(e);
        //TODO:
        if (Log.IsInfoEnabled)
        {
            Log.InfoFormat("OnElementClicked: {0}", e.Element.GetAttribute("id"));
        }
    }
}
Here is my SetUpFixture, which I THINK is run every time a new TestFixture class is run:
[SetUpFixture]
public class BaseTest
{
    protected static readonly ILog Log = LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
    private FirefoxProfile firefoxProfile;
    private IWebDriver driver;
    private EventLogger eventLogger;

    public IWebDriver StartDriver()
    {
        Common.WebBrowser = ConfigurationManager.AppSettings["WebBrowser"];
        Log.Info("Browser: " + Common.WebBrowser);
        switch (Common.WebBrowser)
        {
            case "firefox":
            {
                firefoxProfile = new FirefoxProfile { AcceptUntrustedCertificates = true };
                driver = new FirefoxDriver(firefoxProfile);
                break;
            }
            case "iexplorer":
            {
                driver = new InternetExplorerDriver();
                break;
            }
            case "chrome":
            {
                driver = new ChromeDriver();
                break;
            }
        }
        driver.Manage().Timeouts().ImplicitlyWait(Common.DefaultTimeSpan);
        // Here is where I start my EventLogger to handle the events from the Selenium
        // WebDriver: OnElementClicked, OnFindingElement, etc.
        // Is this the best way? Seems a bit messy, lack of structure.
        return eventLogger = new EventLogger(driver);
    }

    public EventLogger EventLogger
    {
        get { return eventLogger; }
    }
}
Here is one of the many TestFixtures I have, each one based on a Selenium 2 PageObject:
[TestFixture]
public class LoginPageTest : BaseTest
{
    private IWebDriver driver;
    private LoginPage loginPage;

    [SetUp]
    public void SetUp()
    {
        // Where I use the Log from the BaseTest
        // protected static readonly ILog Log <-- top of BaseTest
        Log.Info("SetUp");
        driver = StartDriver();
        driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(30));
        loginPage = new LoginPage();
        PageFactory.InitElements(driver, loginPage);
    }

    [Test]
    public void SubmitFormInvalidCredentials()
    {
        Console.WriteLine("SubmitFormInvalidCredentials");
        loginPage.UserName.SendKeys("invalid");
        loginPage.Password.SendKeys("invalid");
        loginPage.SubmitButton.Click();
        IWebElement invalidCredentials = driver.FindElement(By.Id("ctl00_ctl00_ctl00_insideForm_insideForm_ctl02_title"));
        Assert.AreEqual("Invalid user name or password", invalidCredentials.Text);
    }
}
My Log.txt file is obviously being overwritten each time a TestFixture is run.
How can I set up my NUnit testing so that Log4Net is configured only once, and can be used in both my EventLogger and my TestFixtures?
I have Googled around a lot; maybe it's something simple. Do I have design issues in the structure of my project?

Try explicitly setting AppendToFile="True" in the log4net configuration for the FileAppender you are using:
<log4net>
  <appender name="..." type="log4net.Appender....">
    <appendToFile value="true" />
    <!-- ... -->
  </appender>
</log4net>
From the FileAppender.AppendToFile property documentation:
Gets or sets a flag that indicates whether the file should be appended to or overwritten.
Regarding [SetUpFixture], I believe you are using it in the wrong way. You are not supposed to mark the base class of each TestFixture with this attribute; that looks messy. You should declare a class that is meant to be the setup fixture and mark it with the [SetUpFixture] attribute, so it is called ONCE for all TestFixtures within a given (declaring) namespace.
From NUnit documentation, SetUpFixtureAttribute:
This is the attribute that marks a class that contains the one-time
setup or teardown methods for all the test fixtures under a given
namespace. The class may contain at most one method marked with the
SetUpAttribute and one method marked with the TearDownAttribute
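As a minimal sketch of that arrangement (the namespace and class names here are made up, and the [SetUp]/[TearDown] methods follow the NUnit 2.x convention quoted above), log4net is then configured exactly once for every fixture in the namespace:
using log4net;
using log4net.Config;
using NUnit.Framework;

namespace MyProject.Tests // hypothetical: covers all TestFixtures under this namespace
{
    [SetUpFixture]
    public class TestNamespaceSetup
    {
        [SetUp]
        public void RunBeforeAnyTests()
        {
            // Configure log4net once per test run instead of once per EventLogger,
            // so the FileAppender opens Log.txt a single time.
            XmlConfigurator.Configure();
        }

        [TearDown]
        public void RunAfterAllTests()
        {
            LogManager.Shutdown();
        }
    }
}
The EventLogger constructor can then drop its XmlConfigurator.Configure() call and simply use LogManager.GetLogger as it already does.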


.NET 6 minimal web API: override settings for test

Program.cs
WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
string foo = builder.Configuration.GetValue<string>("foo"); // Is null. Shouldn't be.
public partial class Program{}
Test project
public class MyWebApplicationFactory<TStartup> : WebApplicationFactory<TStartup> where TStartup : class
{
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureAppConfiguration((context, configBuilder) =>
        {
            configBuilder.AddInMemoryCollection(
                (new Dictionary<string, string?>
                {
                    ["foo"] = "bar"
                }).AsEnumerable());
        });
    }
}
public class Test : IClassFixture<MyWebApplicationFactory<Program>>
{
    private readonly HttpClient _client;

    public Test(MyWebApplicationFactory<Program> factory)
    {
        _client = factory.CreateClient();
    }
}
But the new settings are never added; when I debug the test, foo is always null.
I don't see how the settings could be added, either, because Program creates a new builder that never goes anywhere near the WebApplicationFactory.
I think this might be because my Program needs to read settings in order to configure services, but the settings are not updated by the test system until after the services have been configured?
Can it be made to work?
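If the timing theory above is right, one workaround sketch (my assumption, not a verified fix for this project) is to push the override into the host configuration itself, which the factory applies before the deferred Program code builds its WebApplicationBuilder:
public class MyWebApplicationFactory<TStartup> : WebApplicationFactory<TStartup> where TStartup : class
{
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        // UseSetting writes to the host configuration, so the value should be
        // visible to builder.Configuration while Program is starting up,
        // unlike ConfigureAppConfiguration, which may run too late for
        // settings read during service registration.
        builder.UseSetting("foo", "bar");
    }
}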

// Noncompliant comment usage - SonarQube Custom Rule

I am trying to write a few SonarQube custom rules for my project.
After reading the documents below -
https://docs.sonarqube.org/display/PLUG/Writing+Custom+Java+Rules+101
and
https://github.com/SonarSource/sonar-custom-rules-examples,
I created a custom rule consisting of the classes below.
The Rule file:
#Rule(key = "MyAssertionRule")
public class FirstSonarCustomRule extends BaseTreeVisitor implements JavaFileScanner {
private static final String DEFAULT_VALUE = "Inject";
private JavaFileScannerContext context;
/**
* Name of the annotation to avoid. Value can be set by users in Quality
* profiles. The key
*/
#RuleProperty(defaultValue = DEFAULT_VALUE, description = "Name of the annotation to avoid, without the prefix #, for instance 'Override'")
protected String name;
#Override
public void scanFile(JavaFileScannerContext context) {
this.context = context;
System.out.println(PrinterVisitor.print(context.getTree()));
scan(context.getTree());
}
#Override
public void visitMethod(MethodTree tree) {
List<StatementTree> statements = tree.block().body();
for (StatementTree statement : statements) {
System.out.println("KIND IS " + statement.kind());
if (statement.is(Kind.EXPRESSION_STATEMENT)) {
if (statement.firstToken().text().equals("Assert")) {
System.out.println("ERROR");
}
}
}
}
}
The Test class:
public class FirstSonarCustomRuleTest {

    @Test
    public void verify() {
        FirstSonarCustomRule f = new FirstSonarCustomRule();
        f.name = "ASSERTION";
        JavaCheckVerifier.verify("src/test/files/FirstSonarCustom.java", f);
    }
}
And finally - the Test file:
class FirstSonarCustom {

    int aField;

    public void methodToUseTestNgAssertions() {
        Assert.assertTrue(true);
    }
}
The above test file would later be my project's source code.
As per the Sonar documentation, the // Noncompliant comment is mandatory in my test file. Thus my first question: should I add this comment everywhere in my source code too?
If yes, is there any way I can avoid adding this comment? I do not want to carry that refactoring exercise across the whole codebase.
Can someone suggest what I need to do here?
I am using SonarQube 6.3.
This comment is only used by the test framework (the JavaCheckVerifier class) to test the implementation of your rule. It is not mandatory in any way, and you certainly don't need it in your real code.
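To make that concrete, here is a sketch (my own illustration, not from the original post) of how the fixture file under src/test/files would carry the marker, assuming the rule reports an issue on every Assert statement:
class FirstSonarCustom {
    public void methodToUseTestNgAssertions() {
        Assert.assertTrue(true); // Noncompliant
    }
}
JavaCheckVerifier then asserts that the rule raises issues exactly on the lines carrying the // Noncompliant comment; your production code never needs it.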

How do I track metrics in JMeter for Java Requests with sub results?

I am using JMeter with Java Request samplers. These call Java classes I have written, each of which returns a SampleResult object containing the timing metrics for the use case. SampleResult is a tree and can have child SampleResult objects (via the SampleResult.addSubResult method). I can't seem to find a good way in JMeter to track the sub results, so I can only easily get the results for the parent SampleResult.
Is there a listener in JMeter that allows me to see statistics/graphs for sub results (for instance, the average time across all sub results with the same name)?
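For context, a minimal sketch of the kind of sampler in question (class and label names are made up); the parent result aggregates two timed steps as sub results:
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

public class MyUseCaseSampler extends AbstractJavaSamplerClient {
    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        SampleResult parent = new SampleResult();
        parent.setSampleLabel("useCase");
        parent.sampleStart();

        SampleResult write = new SampleResult();
        write.setSampleLabel("write");
        write.sampleStart();
        // ... perform the timed step ...
        write.sampleEnd();
        write.setSuccessful(true);
        parent.addSubResult(write); // becomes a child in the result tree

        parent.sampleEnd();
        parent.setSuccessful(true);
        return parent;
    }
}
Built-in listeners report on the parent label only, which is the problem described above.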
I have just succeeded in doing this and wanted to share it. If you follow the instructions I provide here, it will work for you as well. I did this for the summary table listener, on Windows, using Eclipse.
Steps:
Go to JMeter's web site and download the source code. You can find that here, for version 3.0.
http://jmeter.apache.org/download_jmeter.cgi
Once there, I clicked the option to download the Zip file for the Source.
Then, on that same page, download the binary for version 3.0, if you have not already done so. Then, extract that zip file onto your hard drive.
Once you've extracted the zip file to your hard drive, grab the file "SummaryReport.java". It can be found here: "\apache-jmeter-3.0\src\components\org\apache\jmeter\visualizers\SummaryReport.java"
Create a new class in Eclipse, then Copy/Paste all of that code into your new class. Then, rename your class from what it is, "SummaryReport" to a different name. And everywhere in the code, replace "SummaryReport" with the new name of your class.
I am using Java 8. So, there is one line of code that won't compile for me. It's the line below.
private final Map tableRows = new ConcurrentHashMap<>();
You need to remove the <> on that line if your compiler rejects it (mine did); then it will compile.
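After the change, the line reads:
private final Map tableRows = new ConcurrentHashMap();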
There was one more line that gave a compile error. It was the one below.
CSVSaveService.saveCSVStats(StatGraphVisualizer.getAllTableData(model, FORMATS), writer,
    saveHeaders.isSelected() ? StatGraphVisualizer.getLabels(COLUMNS) : null);
Firstly, it wasn't finding the source for class StatGraphVisualizer. So, I imported it, as below.
import org.apache.jmeter.visualizers.StatGraphVisualizer;
Secondly, it wasn't finding the method getLabels in StatGraphVisualizer. Here is what that line of code looked like after I fixed it:
CSVSaveService.saveCSVStats(StatGraphVisualizer.getAllTableData(model, FORMATS), writer);
That compiles. That method doesn't need the second argument.
Now, everything should compile.
Find this method below. This is where you will begin adding your customizations.
@Override
public void add(final SampleResult res) {
You need to create an array of all of your sub results, as I did below. The line wrapped in ** is the new code. (All new code is marked with **, standing in for bold.)
public void add(final SampleResult res) {
    final String sampleLabel = res.getSampleLabel(); // useGroupName.isSelected());
    **final SampleResult[] theSubResults = res.getSubResults();**
Then create a String for the label of each sub result object, as seen below.
**final String writesampleLabel = theSubResults[0].getSampleLabel(); // (useGroupName.isSelected());
final String readsampleLabel = theSubResults[1].getSampleLabel(); // (useGroupName.isSelected());**
Next, go to the method below.
JMeterUtils.runSafe(false, new Runnable() {
    @Override
    public void run() {
The new code added is below, in Bold.
JMeterUtils.runSafe(false, new Runnable() {
    @Override
    public void run() {
        Calculator row = null;
        **Calculator row1 = null;
        Calculator row2 = null;**
        synchronized (lock) {
            row = tableRows.get(sampleLabel);
            **row1 = tableRows.get(writesampleLabel);
            row2 = tableRows.get(readsampleLabel);**
            if (row == null) {
                row = new Calculator(sampleLabel);
                tableRows.put(row.getLabel(), row);
                model.insertRow(row, model.getRowCount() - 1);
            }
            **if (row1 == null) {
                row1 = new Calculator(writesampleLabel);
                tableRows.put(row1.getLabel(), row1);
                model.insertRow(row1, model.getRowCount() - 1);
            }
            if (row2 == null) {
                row2 = new Calculator(readsampleLabel);
                tableRows.put(row2.getLabel(), row2);
                model.insertRow(row2, model.getRowCount() - 1);
            }**
        } // close lock
        /*
         * Synch is needed because multiple threads can update the counts.
         */
        synchronized (row) {
            row.addSample(res);
        }
        **synchronized (row1) {
            row1.addSample(theSubResults[0]);
        }**
        **synchronized (row2) {
            row2.addSample(theSubResults[1]);
        }**
That is all that needs to be customized.
In Eclipse, export your new class into a Jar file. Then place it inside the lib/ext folder of the JMeter binary that you extracted in Step 1 above.
Start up JMeter as you normally would.
In your Java sampler, add a new Listener. You will now see two "Summary Table" listeners; one of these is the new one you have just created. Once you have added the new one to your Java sampler, rename it to something unique. Then run your test and look at your new "Summary Table" listener. You will see summary results/stats for all of your sample results.
My next step is to perform these same steps for all of the other Listeners that I would like to customize.
I hope that this post helps.
Here is some of my plugin code which you can use as a starting point in writing your own plugin. I can't really post everything, as there are really dozens of classes. A few things to know:
- my plugin, like all visualizer plugins, extends the JMeter class AbstractVisualizer
- you need the following jars in Eclipse to compile: jfxrt.jar, ApacheJMeter_core.jar
- you need Java 1.8 for JavaFX (the jar comes with the JDK)
- if you compile a plugin, you need to put it in jmeter/lib/ext; you also need to put the jars from the previous bullet in jmeter/lib
- there is a method called add(SampleResult) in my class. It gets called by the JMeter framework every time a Java sample completes, with the SampleResult passed as a parameter. Assuming you have your own Java sample classes that extend AbstractJavaSamplerClient, your class will have a method called runTest which returns a SampleResult. That same return object will be passed into your plugin's add method.
- my plugin puts all the sample results into a buffer and only updates the screen every 5 results.
Here is the code:
import java.awt.BorderLayout;
import java.util.ArrayList;
import java.util.List;
import javafx.application.Platform;
import javafx.embed.swing.JFXPanel;
import javax.swing.border.Border;
import javax.swing.border.EmptyBorder;
import org.apache.jmeter.samplers.SampleResult;
import org.apache.jmeter.testelement.TestStateListener;
import org.apache.jmeter.visualizers.gui.AbstractVisualizer;

public class FxVisualizer extends AbstractVisualizer implements TestStateListener {

    int currentId = 0;
    private static final long serialVersionUID = 1L;
    private static final int BUFFER_SIZE = 5;

    @Override
    public String getName()
    {
        return super.getName(); //"George's sub result viewer.";
    }

    @Override
    public String getStaticLabel()
    {
        return "Georges FX Visualizer";
    }

    @Override
    public String getComment()
    {
        return "George wrote this plugin. There are many plugins like it but this one is mine.";
    }

    static Long initCount = new Long(0);

    public FxVisualizer()
    {
        init();
    }

    private void init()
    {
        //LoggingUtil.debug("in FxVisualizer init()");
        try
        {
            FxTestListener.setListener(this);
            this.setLayout(new BorderLayout());
            Border margin = new EmptyBorder(10, 10, 5, 10);
            this.setBorder(margin);
            //this.add(makeTitlePanel(), BorderLayout.NORTH);
            final JFXPanel fxPanel = new JFXPanel();
            add(fxPanel);
            //fxPanel.setScene(getScene());
            Platform.runLater(new Runnable() {
                @Override
                public void run() {
                    initFX(fxPanel);
                }
            });
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }

    static FxVisualizerScene fxScene;

    private static void initFX(JFXPanel fxPanel) {
        // This method is invoked on the JavaFX thread
        fxScene = new FxVisualizerScene();
        fxPanel.setScene(fxScene.getScene());
    }

    final List<Event> bufferedEvents = new ArrayList<Event>();

    @Override
    public void add(SampleResult result)
    {
        final List<Event> events = ...; // here you need to take the result.getSubResults() parameter and get all the children events.
        final List<Event> eventsToAdd = new ArrayList<Event>();
        synchronized (bufferedEvents)
        {
            for (Event evt : events)
            {
                bufferedEvents.add(evt);
            }
            if (bufferedEvents.size() >= BUFFER_SIZE)
            {
                eventsToAdd.addAll(bufferedEvents);
                bufferedEvents.clear();
            }
        }
        if (eventsToAdd.size() > 0)
        {
            Platform.runLater(new Runnable() {
                @Override
                public void run() {
                    updatePanel(eventsToAdd);
                }
            });
        }
    }

    public void updatePanel(List<Event> events)
    {
        for (Event evt : events)
        {
            fxScene.addEvent(evt);
        }
    }

    @Override
    public void clearData()
    {
        synchronized (bufferedEvents)
        {
            Platform.runLater(new Runnable() {
                @Override
                public void run() {
                    bufferedEvents.clear();
                    fxScene.clearData();
                }
            });
        }
    }

    @Override
    public String getLabelResource() {
        return "Georges Java Sub FX Sample Listener";
    }

    Boolean isRunning = false;

    @Override
    public void testEnded()
    {
        final List<Event> eventsToAdd = new ArrayList<Event>();
        synchronized (bufferedEvents)
        {
            eventsToAdd.addAll(bufferedEvents);
            bufferedEvents.clear();
        }
        if (eventsToAdd.size() > 0)
        {
            Platform.runLater(new Runnable() {
                @Override
                public void run() {
                    updatePanel(eventsToAdd);
                    fxScene.testStopped();
                }
            });
        }
    }

    Long testCount = new Long(0);

    @Override
    public void testStarted() {
        synchronized (bufferedEvents)
        {
            Platform.runLater(new Runnable() {
                @Override
                public void run() {
                    updatePanel(bufferedEvents);
                    bufferedEvents.clear();
                    fxScene.testStarted();
                }
            });
        }
    }

    @Override
    public void testEnded(String arg0)
    {
        //LoggingUtil.debug("testEnded 2:" + arg0);
        testEnded();
    }

    int registeredCount = 0;

    @Override
    public void testStarted(String arg0) {
        //LoggingUtil.debug("testStarted 2:" + arg0);
        testStarted();
    }
}
OK, so I just decided to write my own JMeter plugin, and it is dead simple. I'll share the code for posterity when it is complete. Just write a class that extends AbstractVisualizer, compile it into a jar, then throw it into the JMeter lib/ext directory. That plugin will show up in the listeners section of JMeter when you go to add visualizers.

What could cause a class implementing "ApplicationListener<ContextRefreshedEvent>" not to be notified of a "ContextRefreshedEvent"

I have a Spring application listener implementing ApplicationListener<ContextRefreshedEvent> as follows:
@Profile({ Profiles.DEFAULT, Profiles.CLOUD, Profiles.TEST, Profiles.DEV })
@Component
public class BootstrapLoaderListener implements ApplicationListener<ContextRefreshedEvent>, ResourceLoaderAware, Ordered {

    private static final Logger log = Logger.getLogger(BootstrapLoaderListener.class);

    @Override
    public int getOrder() {
        return HIGHEST_PRECEDENCE;
    }

    @Autowired
    private DayToTimeSlotRepository dayToTimeSlotRepository;

    @Autowired
    private LanguageRepository languageRepository;

    private ResourceLoader resourceLoader;

    @Override
    @Transactional
    public void onApplicationEvent(ContextRefreshedEvent contextRefreshedEvent) {
        initApplication();
    }

    private void initApplication() {
        if (dayToTimeSlotRepository.count() == 0) {
            initDayToTimeSlots();
        }
        if (languageRepository.count() == 0) {
            initLanguages();
        }
    }

    private void initDayToTimeSlots() {
        for (Day day : Day.values()) {
            for (TimeSlot timeSlot : TimeSlot.values()) {
                DayToTimeSlot dayToTimeSlot = new DayToTimeSlot();
                dayToTimeSlot.setDay(day);
                dayToTimeSlot.setTimeSlot(timeSlot);
                dayToTimeSlot.setDisabled(isDayToTimeSlotDisabled(timeSlot, day));
                dayToTimeSlotRepository.save(dayToTimeSlot);
            }
        }
    }
...
I rely on this listener class to insert reference data that is neither updated nor deleted, and I have a number of Spring integration tests that use this class, one of which fails because the listener is not notified (initDayToTimeSlots is not invoked).
I am trying to pinpoint where the problem comes from by debugging the tests. I noticed that when I run the problematic test class on its own, the tests it contains pass (indicating that the listener is notified), but when I run all of my application test classes together, the listener is not notified, causing the test to fail (indicating that some other test changes/dirties the context).
Here is the problematic test class:
@ActiveProfiles({ Profiles.TEST })
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { FullIntegrationTestConfiguration.class, BaseTestConfiguration.class })
public class RegularDayToTimeSlotsTest {

    private static int NUMBER_OF_REGULAR_DAY_TO_TIME_SLOTS_IN_WEEK = 25;

    @Before
    public void setup() {
        //org.hsqldb.util.DatabaseManagerSwing.main(new String[] { "--url", "jdbc:hsqldb:mem:bignibou", "--noexit" });
    }

    @Autowired
    private AdvertisementService advertisementService;

    @Test
    public void shouldNotContainSaturdayNorSunday() {
        Set<DayToTimeSlot> regularDayToTimeSlots = advertisementService.retrieveRegularDayToTimeSlots();
        assertThat(regularDayToTimeSlots).onProperty("day").excludes(Day.SATURDAY, Day.SUNDAY);
        assertThat(regularDayToTimeSlots).onProperty("day").contains(Day.MONDAY, Day.THUESDAY);
    }

    @Test
    public void shouldNotContainEveningNorNighttime() {
        Set<DayToTimeSlot> regularDayToTimeSlots = advertisementService.retrieveRegularDayToTimeSlots();
        assertThat(regularDayToTimeSlots).onProperty("timeSlot").excludes(TimeSlot.EVENING, TimeSlot.NIGHTTIME);
        assertThat(regularDayToTimeSlots).onProperty("timeSlot").contains(TimeSlot.MORNING, TimeSlot.LUNCHTIME);
    }

    @Test
    public void shouldContainCorrectNumberOfDayToTimeSlots() {
        Set<DayToTimeSlot> regularDayToTimeSlots = advertisementService.retrieveRegularDayToTimeSlots();
        assertThat(regularDayToTimeSlots).hasSize(NUMBER_OF_REGULAR_DAY_TO_TIME_SLOTS_IN_WEEK);
    }
}
I am puzzled to see that both the prepareRefresh() and finishRefresh() methods within the AbstractApplicationContext.refresh method are indeed called, but that my listener is not notified...
Has anyone got any clue?
P.S. I know I could use @DirtiesContext in order to get a fresh context, and I also know it would be preferable not to rely on an application listener for my tests, but I am very anxious to understand what is going wrong here. Hence this post.
edit 1: When I debug the problematic test class in isolation, I notice that the event source is of type GenericApplicationContext and as explained above the test passes OK because the listener is notified. However when all test classes are run together, the event source is, oddly enough, of type GenericWebApplicationContext and no listener is found here in SimpleApplicationEventMulticaster:
@Override
public void multicastEvent(final ApplicationEvent event) {
    for (final ApplicationListener<?> listener : getApplicationListeners(event)) {
        Executor executor = getTaskExecutor();
        if (executor != null) {
            executor.execute(new Runnable() {
                @Override
                public void run() {
                    invokeListener(listener, event);
                }
            });
        }
        else {
            invokeListener(listener, event);
        }
    }
}
edit 2: my comments in edit 1 make me ask myself what is responsible for determining the uniqueness of context configuration...
For instance, I have only two test classes with the following context configuration:
@ContextConfiguration(classes = { FullIntegrationTestConfiguration.class, BaseTestConfiguration.class })
I guess they both will use the same cached context, won't they? Now can a third class use the same cached context even though it does not have exactly the same context configuration?
Why does my test get a GenericWebApplicationContext above?
my comments in edit 1 make me ask myself what is responsible for
determining the uniqueness of context configuration...
The elements that make up the context cache key are described in the Context caching section of the "Testing" chapter in the reference manual.
For instance, I have only two test classes with the following context
configuration:
@ContextConfiguration(classes = {
FullIntegrationTestConfiguration.class, BaseTestConfiguration.class })
I guess they both will use the same cached context, won't they?
If they declare only those two configuration classes in that exact order, then yes.
Now can a third class use the same cached context even though it does not
have exactly the same context configuration?
No.
Why does my test get a GenericWebApplicationContext above?
A GenericWebApplicationContext is only loaded if your test class (or one of its superclasses) is annotated with @WebAppConfiguration.
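For example (a minimal illustration of that rule; the test class name is made up, and BaseTestConfiguration is borrowed from the question), this single annotation is what switches the TestContext framework to a web context:
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.web.WebAppConfiguration;

@WebAppConfiguration
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { BaseTestConfiguration.class })
public class SomeWebTest {
    // With @WebAppConfiguration present, the framework loads a
    // GenericWebApplicationContext; without it, a plain
    // GenericApplicationContext is loaded instead.
}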
If you are experiencing behavior that contradicts this, then you have discovered a bug in which case we would appreciate it if you could produce a scaled down test project in the issue repository and create a corresponding JIRA issue against the "Spring Framework" and its "Test" component.
Thanks,
Sam (author of the Spring TestContext Framework)

MEF and AssemblyCatalog / AggregateCatalog

I have a simple console app as below (irrelevant code removed for simplicity):
[ImportMany(typeof(ILogger))]
public IEnumerable<ILogger> _loggers { get; set; }

public interface ILogger
{
    void Write(string message);
}

[Export(typeof(ILogger))]
public class ConsoleLogger : ILogger
{
    public void Write(string message)
    {
        Console.WriteLine(message);
    }
}

[Export(typeof(ILogger))]
public class DebugLogger : ILogger
{
    public void Write(string message)
    {
        Debug.Print(message);
    }
}
The code that initializes the catalog is below:
(1) var catalog = new AggregateCatalog();
(2) catalog.Catalogs.Add(new DirectoryCatalog(AppDomain.CurrentDomain.BaseDirectory));
(3) //var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
var container = new CompositionContainer(catalog);
var batch = new CompositionBatch();
batch.AddPart(this);
container.Compose(batch);
If the catalog is initialized through lines 1-2, nothing gets loaded into _loggers.
If the catalog is initialized through line 3, both loggers get loaded into _loggers.
What's the issue with the AggregateCatalog approach?
Thanks
It should work the way you are using it.
However, on line 2 you are creating a DirectoryCatalog, and on line 3 an AssemblyCatalog. Does it work as expected if you change line 2 to:
catalog.Catalogs.Add(new AssemblyCatalog(Assembly.GetExecutingAssembly()));
I found the problem.
It seems DirectoryCatalog(path) searches only DLLs by default, and my test program was a console application: the exports lived in the EXE (not a DLL), so they weren't loaded.
AssemblyCatalog(Assembly.GetExecutingAssembly()), on the other hand, obviously loaded the exports from the current assembly (which is the EXE).
The solution is to use the other constructor, DirectoryCatalog(path, searchPattern), and pass "*.*" as the second parameter. Then it works.
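For reference, a small sketch of the fixed initialization (same directory as in the question; the search pattern is the only change):
// Scan the output directory for parts in any file, EXEs included.
var catalog = new AggregateCatalog();
catalog.Catalogs.Add(new DirectoryCatalog(AppDomain.CurrentDomain.BaseDirectory, "*.*"));
var container = new CompositionContainer(catalog);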
