Setting restrictedToMinimumLevel per sink in appSettings

Not sure if this is supported, but I would like to set the 'restrictedToMinimumLevel' for my ColoredConsole sink via appSettings.
I am creating the Serilog global object as follows:
Log.Logger = new LoggerConfiguration()
.ReadAppSettings()
.CreateLogger();
I tried the following in my app.config:
<add key="serilog:write-to:ColoredConsole.restrictedToMinimumLevel" value="Information" />
It seems Serilog tries to process the setting but chokes on converting the string to the Serilog.Events.LogEventLevel enum:
An unhandled exception of type 'System.InvalidCastException' occurred in mscorlib.dll
Additional information: Invalid cast from 'System.String' to 'Serilog.Events.LogEventLevel'.
Am I doing something wrong, or is this functionality not currently supported?
Thanks.

This should work, so in all likelihood you've found a bug. I've raised an issue on the Serilog issue tracker for it; if possible I'll get a fix out soon.
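For reference, a minimal sketch of the appSettings shape the reader convention is meant to accept, once the conversion bug is fixed. The key names follow Serilog's documented `serilog:` prefix scheme, where constructor arguments of a sink are addressed as `write-to:SinkName.argumentName`; the levels here are just example values:

```xml
<appSettings>
  <!-- global minimum level for the pipeline -->
  <add key="serilog:minimum-level" value="Debug" />
  <!-- enable the sink, then set its restrictedToMinimumLevel argument -->
  <add key="serilog:write-to:ColoredConsole" />
  <add key="serilog:write-to:ColoredConsole.restrictedToMinimumLevel" value="Information" />
</appSettings>
```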

Related

How to deal with `app_engine_apis` warning when updating app.yaml from go114 to go115

I recently updated my app.yaml from
runtime: go114
to
runtime: go115
because I was warned in an email that support for go114 was ending.
The service deployed fine but after it finished, I got the message:
Updating service [default]...WARNING: There is an indirect dependency on App Engine APIs, but they are not enabled in your app.yaml. You may see runtime errors trying to access these APIs. Set the app_engine_apis property.
So I added:
app_engine_apis: true
And now the service won't deploy and gives this error:
ERROR: (gcloud.app.deploy) An error occurred while parsing file: [<snip>/app.yaml]
Unexpected attribute 'app_engine_apis' for object of type AppInfoExternal.
Looks like a catch-22. How do I deal with this?
Posting this as community wiki as it's based on @Joel's comments.
It looks like the warning is triggered because those APIs aren't enabled in your app.yaml, so you might get runtime errors when trying to access them.
You should probably reach out to Google Cloud, either through their Issue Tracker or by opening an issue on the relevant GitHub page, so they can fix this, as there doesn't seem to be a workaround for this one.
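For reference, the configuration the warning asks for is just the two lines below (standard app.yaml schema). One assumption worth checking before filing an issue: older releases of the Cloud SDK did not recognize the `app_engine_apis` field at all, so the "Unexpected attribute" parse error can also mean the local gcloud installation needs updating.

```yaml
runtime: go115
app_engine_apis: true
```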

CreateDocumentCollectionQuery in the GraphBulkImport library errors when using LINQ operations on the collection

I am using the CosmosDB BulkExecutor NuGet package (latest pre-release version, 2.4.1-preview) in a .NET Standard 2.0 project.
The bulk executor reference documentation says it supports .NET Standard 2.0, but it errors out when performing LINQ operations on the collection:
“One or more errors occurred. (, Request URI: /, RequestStats: , SDK: Windows/10.0.18363 documentdb-netcore-sdk/2.4.0)”
Following is the code snippet:
var dburi = UriFactory.CreateDatabaseUri(databaseId);
var collection = client.CreateDocumentCollectionQuery(dburi);
var container = collection.Where(c => c.Id.Equals(containerId)).AsEnumerable().FirstOrDefault();
return container;
The error is thrown by the LINQ Where and FirstOrDefault operations in line #3 above.
Following is a partial stack trace of the library:
at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
at Microsoft.Azure.Documents.Linq.DocumentQuery`1.<GetEnumerator>d__31.MoveNext()
at System.Linq.Enumerable.TryGetFirst[TSource](IEnumerable`1 source, Boolean& found)
at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source)
The NuGet documentation link is also confusing for Graph: it says to use the V3 version if you are using the bulk executor, but the V3 version does not appear to support the Graph API. So I am not sure how to use GraphBulkImport in Cosmos DB.
The issue was in my SDK URL configuration; it was incorrect.
After changing to the correct SDK URL, I no longer get this error. Thanks to the people who responded.
There was an InternalServerError message in the inner exception, which made me recheck the URL; a better message saying there was a problem connecting to the source would have helped more.
I think you may be using the wrong namespace in that package. For graph it should be Microsoft.Azure.CosmosDB.BulkExecutor.Graph.
There is an article here that walks through how to use the Bulk Executor library to do bulk graph import, which should be helpful.

Esper AMQPSource not receiving events

I'm trying to use the AMQPSource and I'm getting the error described below; imports are also not working in the EPL module. I already tried adding the full package name to DistanceEvent (events.DistanceEvent) and it does not work.
To send a message I'm using the publish option in the RabbitMQ web admin for the queue, with the following payload:
{"distance":33}
Could anyone help me?
The "IO-error deserializing object" is the reason you are not seeing data.
The AMQPToObjectCollectorSerializable collector expects the AMQP message to carry a valid JVM-serialized object; the "invalid header" means the message content cannot be read by the JVM's object deserializer. Check that the sender produces an AMQP message containing a JVM-serialized object, or replace AMQPToObjectCollectorSerializable with a deserializer that can understand your message format. The code for AMQPToObjectCollectorSerializable can be found on GitHub if you are not sure how it deserializes.
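To make the failure concrete, here is a small self-contained sketch (class and method names are mine, not Esper's) showing that a JSON payload like `{"distance":33}` lacks the 0xACED magic header every JVM serialization stream begins with, which is exactly what the "invalid header" error is complaining about:

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class AmqpPayloadCheck {

    // JVM serialization streams always start with the magic bytes 0xAC 0xED
    // (ObjectStreamConstants.STREAM_MAGIC).
    static boolean looksJavaSerialized(byte[] payload) {
        return payload.length >= 2
            && (payload[0] & 0xFF) == 0xAC
            && (payload[1] & 0xFF) == 0xED;
    }

    public static void main(String[] args) throws Exception {
        byte[] json = "{\"distance\":33}".getBytes(StandardCharsets.UTF_8);
        System.out.println(looksJavaSerialized(json)); // false: plain JSON text

        // A genuinely JVM-serialized object does carry the magic header.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(Integer.valueOf(33));
        oos.flush();
        System.out.println(looksJavaSerialized(bos.toByteArray())); // true

        // Feeding the JSON bytes to ObjectInputStream reproduces the
        // "invalid stream header" failure the collector reports.
        try {
            new ObjectInputStream(new ByteArrayInputStream(json)).readObject();
        } catch (StreamCorruptedException e) {
            System.out.println(e.getMessage()); // invalid stream header: 7B226469
        }
    }
}
```

So the RabbitMQ web admin's publish form, which sends the text as plain UTF-8 bytes, can never satisfy a collector that expects JVM serialization; either the sender must serialize a Java object, or the collector must be swapped for a JSON-aware one.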

What is the GlobalIdentity and how do I set it in the FileNet web service?

I'm trying to upload a document into filenet via CEWS, but I'm getting this error:
“The unexpected exception is chained to this exception. Message was: com.filenet.apiimpl.core.GlobalIdentity incompatible with com.filenet.apiimpl.core.RepositoryIdentity“
Our Filenet people don't seem to know what that means. They've provided working code that basically looks the same as mine (but which I can't compile directly at the moment because it references parts of their project I don't have.)
So is the GlobalIdentity something I need to pass in through the web service? If so, how? If not, where is it configured?
OK, I finally spotted my mistake.
I had incorrectly set crt.TargetSpecification.classId to the name of the repository I was trying to use rather than to the correct classId.

spymemcached throws ClassNotFoundException from SerializingTranscoder for BasicClientCookie

I'm using spymemcached version 2.8.1 to read a cookie object but I keep running into the following exception:
app[web.1]: WARN net.spy.memcached.transcoders.SerializingTranscoder:
Caught CNFE decoding 513 bytes of data
app[web.1]: java.lang.ClassNotFoundException
org.apache.http.impl.cookie.BasicClientCookie
I am using httpclient version 4.1.1: https://dl.dropbox.com/u/6207935/Screen%20Shot%202013-02-05%20at%202.47.19%20PM.png
which has BasicClientCookie class inside of it so I'm not quite sure why it "cannot be found"
Also, based on hearsay, I think BasicClientCookie is already marked as Serializable in 4.1.1, but the exact javadocs have been a bit difficult to dig up, honestly... so it's an assumption on my part. Anyway, the exception doesn't seem to be related to serialization, but I thought I'd throw this out there for question completeness.
What would be some ideas to resolve this issue?
UPDATE # 1 (Feb 5, 2013)
These may shed some light on the problem:
http://code.google.com/p/spymemcached/issues/detail?id=146 - But when using Heroku I don't know how to obtain the same level of control over my app server's file system ... the way its described here.
http://code.google.com/p/spymemcached/issues/detail?id=155 - Not sure how to get spymemcached to use the custom SerializingTranscoder.
The advice given here worked: http://code.google.com/p/spymemcached/issues/detail?id=155#c2
The only additional bit was testing it out after applying the custom SerializingTranscoder to spymemcached:
MemcachedClient mc =
    new MemcachedClient(
        new ConnectionFactoryBuilder()
            .setTranscoder(new CustomSerializingTranscoder()) // makes it take effect
            .setProtocol(ConnectionFactoryBuilder.Protocol.BINARY)
            ...