Why does the JanusGraph ConfiguredGraphFactory not work? - janusgraph

Can anyone help me?
I configured the JanusGraph ConfiguredGraphFactory following the instructions, and I can run ConfiguredGraphFactory.createConfiguration(new MapConfiguration(map)),
but I cannot open the graph with ConfiguredGraphFactory.open("graph1").
This is my gremlin.yaml:
host: 0.0.0.0
port: 8183
scriptEvaluationTimeout: 10000000
channelizer: org.apache.tinkerpop.gremlin.server.channel.WsAndHttpChannelizer
graphManager: org.janusgraph.graphdb.management.JanusGraphManager
graphs: {
ConfigurationManagementGraph: conf/janusgraph-cql-configurationgraph.properties
This is my JanusGraph properties file:
gremlin.graph=org.janusgraph.core.ConfiguredGraphFactory
graph.graphname=ConfigurationManagementGraph
storage.backend=cql
index.ES.backend=elasticsearch
index.ES.hostname=127.0.0.1
When I query the server via curl, I get the following error:
curl localhost:8183 -d '{"gremlin":"ConfiguredGraphFactory.open(/graph1/).vertices()"}' | python -m json.tool
"stackTrace": "com.datastax.driver.core.exceptions.SyntaxError: line 2:33 : syntax
error...\n\ n\ tat
com.datastax.driver.core.Responses$Error.asException(Responses.java: 143)\ n\ tat
com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java: 179)\ n\ tat
com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java: 196)\ n\ tat
com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java: 50)\ n\ tat com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java: 827)\ n\ tat
com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java: 661)\ n\ tat com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java: 1083)\ n\ tat com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java: 1006)\ n\ tat
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java: 105)\ n\ tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java: 356)\ n\ tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java: 342)\ n\ tat
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java: 335)\ n\ tat io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java: 286)\ n\ tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java: 356)\ n\ tat
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java: 342)\ n\ tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java: 335)\ n\ tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java: 102)\ n\ tat
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java: 356)\ n\ tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java: 342)\ n\ tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java: 335)\ n\ tat
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java: 312)\ n\ tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java: 286)\ n\ tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java: 356)\ n\ tat
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java: 342)\ n\ tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java: 335)\ n\ tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java: 1304)\ n\ tat
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java: 356)\ n\ tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java: 342)\ n\ tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java: 921)\ n\ tat
io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java: 725)\ n\ tat io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java: 400)\ n\ tat io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java: 300)\ n\ tat
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java: 131)\ n\ tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java: 30)\ n\ tat java.lang.Thread.run(Thread.java: 748)\ n " }

Your configuration file for the server is not a valid YAML file as it's incomplete. It should look something like this:
# Copyright 2019 JanusGraph Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
host: 0.0.0.0
port: 8182
scriptEvaluationTimeout: 30000
channelizer: org.janusgraph.channelizers.JanusGraphWebSocketChannelizer
graphManager: org.janusgraph.graphdb.management.JanusGraphManager
graphs: {
  ConfigurationManagementGraph: conf/janusgraph-cql-configurationgraph.properties
}
scriptEngines: {
  gremlin-groovy: {
    plugins: { org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin: {},
               org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: {},
               org.apache.tinkerpop.gremlin.tinkergraph.jsr223.TinkerGraphGremlinPlugin: {},
               org.apache.tinkerpop.gremlin.jsr223.ImportGremlinPlugin: {classImports: [java.lang.Math], methodImports: [java.lang.Math#*]},
               org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: {files: []}}}}
serializers:
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true }}
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
  # Older serialization versions for backwards compatibility:
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoLiteMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV2d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
processors:
  - { className: org.apache.tinkerpop.gremlin.server.op.session.SessionOpProcessor, config: { sessionTimeout: 28800000 }}
  - { className: org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor, config: { cacheExpirationTime: 600000, cacheMaxSize: 1000 }}
metrics: {
  consoleReporter: {enabled: true, interval: 180000},
  csvReporter: {enabled: true, interval: 180000, fileName: /tmp/gremlin-server-metrics.csv},
  jmxReporter: {enabled: true},
  slf4jReporter: {enabled: true, interval: 180000},
  gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST},
  graphiteReporter: {enabled: false, interval: 180000}}
maxInitialLineLength: 4096
maxHeaderSize: 8192
maxChunkSize: 8192
maxContentLength: 65536
maxAccumulationBufferComponents: 1024
resultIterationBatchSize: 64
writeBufferLowWaterMark: 32768
writeBufferHighWaterMark: 65536
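Once the server starts with a complete YAML file, a minimal sketch of the create-then-open workflow over HTTP could look like this (it assumes you keep your original WsAndHttpChannelizer on port 8183 so the HTTP endpoint stays available; storage.hostname is an assumed value, and MapConfiguration is normally importable in the server's script engine via the JanusGraph Gremlin plugin):
curl -XPOST localhost:8183 -d '{"gremlin":"map = [\"storage.backend\": \"cql\", \"storage.hostname\": \"127.0.0.1\", \"graph.graphname\": \"graph1\"]; ConfiguredGraphFactory.createConfiguration(new MapConfiguration(map))"}'
curl -XPOST localhost:8183 -d '{"gremlin":"ConfiguredGraphFactory.open(\"graph1\").vertices()"}' | python -m json.tool
After that, ConfiguredGraphFactory.open("graph1") should also work from any Gremlin session bound to the same server.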

Related

SAM AWS Lambda Python: gateway api - handle binary form upload

I created a function that receives a file as multipart/form-data,
but when I send the following request:
curl --location --request POST 'https://....execute-api.....amazonaws.com/Prod/ocrReceipt' \
--header 'Connection: keep-alive' \
--header 'Pragma: no-cache' \
--header 'Cache-Control: no-cache' \
--header 'Accept: application/json, text/plain, */*' \
--header 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36' \
--header 'Accept-Language: en-US,en;q=0.9,he;q=0.8' \
--form 'file=@"/D:/receipt.png"'
I get the event with Content-Length missing and "isBase64Encoded": false,
and my code fails because I do fp = io.BytesIO(base64.b64decode(body)) # decode
When I access the same function through its Function URL I don't get this error (i.e. Content-Length is set and "isBase64Encoded": true).
How can I configure API Gateway to accept multipart/form-data properly?
I tried setting multipart/form-data and multipart/* in API > Settings > Binary Media Types,
but it doesn't help.
Note that it says: "You can configure binary support for your API by specifying which media types should be treated as binary types. API Gateway will look at the Content-Type and Accept HTTP headers to decide how to handle the body."
Did I set the correct value?
Can I fix it with some setting in the SAM template?
Event:
{"resource": "/ocrReceipt", "path": "/ocrReceipt", "httpMethod": "POST", "headers": {"Accept": "application/json, text/plain, */*", "Accept-Encoding": "gzip, deflate, br", "Accept-Language": "en-US,en;q=0.9,he;q=0.8", "Cache-Control": "no-cache", "CloudFront-Forwarded-Proto": "https", "CloudFront-Is-Desktop-Viewer": "true", "CloudFront-Is-Mobile-Viewer": "false", "CloudFront-Is-SmartTV-Viewer": "false", "CloudFront-Is-Tablet-Viewer": "false", "CloudFront-Viewer-ASN": "1680", "CloudFront-Viewer-Country": "IL", "Content-Type": "multipart/form-data; boundary=--------------------------592233465752962090703619", "Host": "6y5o0gb78g.execute-api.eu-west-3.amazonaws.com", "Origin": "https://doc2txt.com", "Postman-Token": "f39dea09-b71f-47bb-aa8a-57783df2ddf1", "Pragma": "no-cache", "Referer": "https://doc2txt.com/", "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36", "Via": "1.1 759e09affff41285e9585e1a31532bd4.cloudfront.net (CloudFront)", "X-Amz-Cf-Id": "5jCh7W91ayJWUf_75DfWr-IfCj2CvCobvTlshBjoqaElhVWsNmlFuA==", "X-Amzn-Trace-Id": "Root=1-63b7df99-5a343d1a16ca5cde72a47d66", "X-Forwarded-For": "176.12.157.166, 130.176.1.83", "X-Forwarded-Port": "443", "X-Forwarded-Proto": "https"}, "multiValueHeaders": {"Accept": ["application/json, text/plain, */*"], "Accept-Encoding": ["gzip, deflate, br"], "Accept-Language": ["en-US,en;q=0.9,he;q=0.8"], "Cache-Control": ["no-cache"], "CloudFront-Forwarded-Proto": ["https"], "CloudFront-Is-Desktop-Viewer": ["true"], "CloudFront-Is-Mobile-Viewer": ["false"], "CloudFront-Is-SmartTV-Viewer": ["false"], "CloudFront-Is-Tablet-Viewer": ["false"], "CloudFront-Viewer-ASN": ["1680"], "CloudFront-Viewer-Country": ["IL"], "Content-Type": ["multipart/form-data; boundary=--------------------------592233465752962090703619"], "Host": ["6y5o0gb78g.execute-api.eu-west-3.amazonaws.com"], "Origin": ["https://doc2txt.com"], "Postman-Token": ["f39dea09-b71f-47bb-aa8a-57783df2ddf1"], "Pragma": ["no-cache"], "Referer": ["https://doc2txt.com/"], "User-Agent": ["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36"], "Via": ["1.1 759e09affff41285e9585e1a31532bd4.cloudfront.net (CloudFront)"], "X-Amz-Cf-Id": ["5jCh7W91ayJWUf_75DfWr-IfCj2CvCobvTlshBjoqaElhVWsNmlFuA=="], "X-Amzn-Trace-Id": ["Root=1-63b7df99-5a343d1a16ca5cde72a47d66"], "X-Forwarded-For": ["176.12.157.166, 130.176.1.83"], "X-Forwarded-Port": ["443"], "X-Forwarded-Proto": ["https"]}, "queryStringParameters": null, "multiValueQueryStringParameters": null, "pathParameters": null, "stageVariables": null, "requestContext": {"resourceId": "7g6rs7", "resourcePath": "/ocrReceipt", "httpMethod": "POST", "extendedRequestId": "eT_iDFWXCGYFfyQ=", "requestTime": "06/Jan/2023:08:45:26 +0000", "path": "/Prod/ocrReceipt", "accountId": "899418482974", "protocol": "HTTP/1.1", "stage": "Prod", "domainPrefix": "6y5o0gb78g", "requestTimeEpoch": 1672994726594, "requestId": "8a23ce92-11fe-4e54-bbb0-5ad355130833", "identity": {"cognitoIdentityPoolId": null, "accountId": null, "cognitoIdentityId": null, "caller": null, "sourceIp": "176.12.157.166", "principalOrgId": null, "accessKey": null, "cognitoAuthenticationType": null, "cognitoAuthenticationProvider": null, "userArn": null, "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36", "user": null}, "domainName": "6y5o0gb78g.execute-api.eu-west-3.amazonaws.com", "apiId": "6y5o0gb78g"}, 
"body": "...", "isBase64Encoded": false}
template.yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: >
  python3.8
  Sample SAM Template for ocrSam
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3
    MemorySize: 128
Resources:
  Doc2txtFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      Timeout: 60
      Role: arn:aws:iam::899418482974:role/service-role/ocrReceipt-role-v2edsg0i
      PackageType: Image
      Architectures:
        - x86_64
      Events:
        Doc2txt:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /ocrReceipt
            Method: ANY
    Metadata:
      Dockerfile: Dockerfile
      DockerContext: ./doc2txt
      DockerTag: python3.8-v1
Outputs:
  # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  Doc2txtApi:
    Description: "API Gateway endpoint URL for Prod stage for Doc2txt function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/ocrReceipt/"
  Doc2txtFunction:
    Description: "Doc2txt Lambda Function ARN"
    Value: !GetAtt Doc2txtFunction.Arn
  Doc2txtFunctionIamRole:
    Description: "Implicit IAM Role created for Doc2txt function"
    Value: arn:aws:iam::899418482974:role/service-role/ocrReceipt-role-v2edsg0i #!GetAtt Doc2txtFunctionRole.Arn
EDIT:
I was able to get "isBase64Encoded": true with the following:
Globals:
  Api:
    BinaryMediaTypes:
      - "*/*"
Apparently the Content-Length header is not necessary, so I just removed it from the code:
headers = {
    "content-type": headers["Content-Type"],
    # "content-length": headers["Content-Length"],
}
fs = cgi.FieldStorage(fp=fp, environ=environ, headers=headers)
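Putting the pieces together, here is a minimal sketch of the handler side (the field name "file" comes from the curl --form option above; everything else, including the return value, is just an assumption about how the event is unpacked):

import base64
import cgi
import io

def lambda_handler(event, context):
    # With BinaryMediaTypes "*/*", API Gateway base64-encodes the multipart body.
    body = base64.b64decode(event["body"]) if event.get("isBase64Encoded") else event["body"].encode("utf-8")
    headers = {k.lower(): v for k, v in event["headers"].items()}
    fs = cgi.FieldStorage(
        fp=io.BytesIO(body),
        environ={"REQUEST_METHOD": "POST"},
        headers={"content-type": headers["content-type"]},  # Content-Length is not required
    )
    upload = fs["file"]  # the part named "file" in the multipart body
    return {"statusCode": 200, "body": f"received {len(upload.value)} bytes"}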

creating dynamic index from kafka-filebeat

software version: ES-OSS-7.4.2, filebeat-OSS-7.4.2
The following are my filebeat.yml and grok pipeline.
filebeat.inputs:
- type: kafka
  hosts:
    - test-bigdata-kafka0003:9092
    - test-bigdata-kafka0002:9092
    - test-bigdata-kafka0001:9092
  topics: ["bigdata-k8s-test-serverlog"]
  group_id: "filebeat-kafka-test"

setup.template.settings:
  index.number_of_shards: 1
  _source.enabled: true
setup.template.name: "test"
setup.template.pattern: "test-*"
setup.template.overwrite: true
setup.template.enabled: true
setup.ilm.enable: true
setup.ilm.rollover_alias: "test"

setup.kibana:
  host: "https://xxx:8080"
  username: "superuser"
  password: "123456"
  ssl.verification_mode: none

output.elasticsearch:
  index: "test-%{[jiserver]}-%{+yyyy.MM.dd}"
  pipeline: "test-pipeline"
  hosts: ["xxx:8200"]
  username: "superuser"
  password: "123456"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
pipeline.json
{
  "description": "Test pipeline",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{CUSTOMTIME:timestamp} (?:%{NOTSPACE:jiserver}|-) (?:%{NOTSPACE:hostname}|-) (?:%{LOGLEVEL:level}|-) (?:%{NOTSPACE:thread}|-) (?:%{NOTSPACE:class}|-) (?:%{NOTSPACE:method}|-) (?:%{NOTSPACE:line}|-) (?:%{CUSTOMDATA:message}|-)"],
        "pattern_definitions": {
          "CUSTOMTIME": "%{YEAR}[- ]%{MONTHNUM}[- ]%{MONTHDAY}[- ]%{TIME}",
          "CUSTOMDATA": "((%{GREEDYDATA})[[:space:]]?)+"
        }
      }
    }
  ],
  "on_failure": [
    {
      "set": {
        "field": "error_information",
        "value": "Processor {{ _ingest.on_failure_processor_type }} with tag {{ _ingest.on_failure_processor_tag }} in pipeline {{ _ingest.on_failure_pipeline }} failed with message {{ _ingest.on_failure_message }}"
      }
    }
  ]
}
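For the pipeline: "test-pipeline" setting in output.elasticsearch to resolve, the JSON above has to be registered in Elasticsearch under that id; a minimal sketch using the host and credentials from the filebeat.yml above:
curl -X PUT "http://xxx:8200/_ingest/pipeline/test-pipeline" -H 'Content-Type: application/json' -u superuser:123456 -d @pipeline.json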
I use grok to split the message into different fields, and one of them is jiserver. I want the index name to be built dynamically from jiserver. How can I do that? The settings above do not work, and I receive this error:
[elasticsearch] elasticsearch/client.go:541 Bulk item insert failed (i=0, status=500): {"type":"string_index_out_of_bounds_exception","reason":"String index out of range: 0"}
I found a solution: presumably the grok only runs in the Elasticsearch ingest pipeline, so jiserver does not exist yet when Filebeat formats the index name. Adding a script processor to filebeat.yml populates the field before the output:
processors:
  - script:
      lang: javascript
      id: my_filter
      source: >
        function process(event) {
            var message = event.Get("message");
            var name = message.split(" ");
            event.Put("jiserver", name[2]);
        }

Spring avro consumer message converter exception

I have a Spring Cloud application. Here are the consumer application properties:
cloud:
  stream:
    default:
      producer:
        useNativeEncoding: true
      consumer:
        useNativeEncoding: true
    bindings:
      inputtest:
        destination: test
        content-type: application/*+avro
      outputtest:
        destination: test
        content-type: application/*+avro
    kafka:
      streams:
        binder:
          configuration:
            application:
              server: localhost:8082
      binder:
        producer-properties:
          key.serializer: org.apache.kafka.common.serialization.StringSerializer
          value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
          schema.registry.url: http://localhost:8081
        consumer-properties:
          key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
          value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
          schema.registry.url: http://localhost:8081
          specific.avro.reader: true
    schema:
      avro:
        dynamicSchemaGenerationEnabled: true
A StreamListener is also configured:
public void consumeDetails(GenericRecord message) {
    System.out.println(message);
}
It is able to receive the GenericRecord, but when I put a specific Java class in place of the GenericRecord, it throws an exception:
@StreamListener(CreateMessageSink.INPUT)
public void consumeDetails(TestMessage message) {
    System.out.println(message);
}
The exception I get when a message is received is:
org.springframework.messaging.converter.MessageConversionException: Cannot convert from [com.dataset.CreateMessage] to [com.notebook..TestMessge] for GenericMessage [payload={ "time": 1570614318582, "task": "Create", "userId": "-1", "status": "Success", "severity": "INFO", "details": {"notebookId": "1", "datasetId": "1"}}, headers={kafka_offset=59, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#745a009e, deliveryAttempt=3, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedPartitionId=0, kafka_receivedTopic=dataset-create-test, kafka_receivedTimestamp=1570614318587, contentType=application/*+avro}], failedMessage=GenericMessage [payload={"tenantId": "-1", "time": 1570614318582, "task": "CreateDataset", "userId": "-1", "status": "Success", "source": "dataset-svc", "severity": "INFO", "details": {"notebookId": "d02fd508-f6cc-4d2f-a713-4298bca4e216", "datasetId": "5d9dac2eb7420146d6a79d30"}}, headers={kafka_offset=59, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#745a009e, deliveryAttempt=3, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedPartitionId=0, kafka_receivedTopic=dataset-create-test, kafka_receivedTimestamp=1570614318587, contentType=application/*+avro}]
at org.springframework.cloud.stream.config.SmartPayloadArgumentResolver.resolveArgument(SmartPayloadArgumentResolver.java:126)
at org.springframework.messaging.handler.invocation.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:117)

How to use log4go with configuration file?

I have been trying to use log4go in Golang, but I could not find a proper example where log4go configuration properties like rotation, maxsize, etc. were used to create a logger. Can somebody provide an example? I have referred to many sites.
log4go is not well documented; I found some documentation in the original repository.
If you can, I'd use a different library like logrus, which has better documentation and examples and is actively developed.
The easy way is to use the XML log config, for example:
<logging>
  <filter enabled="true">
    <tag>stdout</tag>
    <type>console</type>
    <!-- level is (:?FINEST|FINE|DEBUG|TRACE|INFO|WARNING|ERROR) -->
    <level>INFO</level>
  </filter>
  <filter enabled="true">
    <tag>file</tag>
    <type>file</type>
    <level>INFO</level>
    <property name="filename"><log file Path></property>
    <!--
      %T - Time (15:04:05 MST)
      %t - Time (15:04)
      %D - Date (2006/01/02)
      %d - Date (01/02/06)
      %L - Level (FNST, FINE, DEBG, TRAC, WARN, EROR, CRIT)
      %S - Source
      %M - Message
      It ignores unknown format strings (and removes them)
      Recommended: "[%D %T] [%L] (%S) %M"
    -->
    <property name="format">[%D %T] [%L] (%S) %M</property>
    <property name="rotate">true</property>   <!-- true enables log rotation, otherwise append -->
    <property name="maxsize">10M</property>   <!-- \d+[KMG]? Suffixes are in terms of 2**10 -->
    <property name="maxlines">0K</property>   <!-- \d+[KMG]? Suffixes are in terms of thousands -->
    <property name="daily">true</property>    <!-- Automatically rotates when a log message is written after midnight -->
    <property name="maxbackup">10</property>  <!-- Max backup for logs rotation -->
  </filter>
</logging>
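For completeness, a minimal sketch of loading the XML config above from Go (the import path is one common log4go fork and the file name is hypothetical; adjust both to your setup):

package main

import (
    log "github.com/alecthomas/log4go"
)

func main() {
    // Load the XML configuration shown above.
    log.LoadConfiguration("logconfig.xml")
    defer log.Close() // flush the file writer before exiting

    log.Info("service started")                 // written: both filters are at INFO
    log.Debug("skipped: below the INFO filter") // dropped by both filters
}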
Personally, I prefer zerolog: https://github.com/rs/zerolog
Here is one example that configures two log outputs:
{
  "console": {
    "enable": true,
    "level": "ERROR"
  },
  "files": [
    {
      "enable": true,
      "level": "DEBUG",
      "filename": "./log/sys.log",
      "category": "syslog",
      "pattern": "[%D %T] [%L] (%S) %M",
      "rotate": true,
      "maxsize": "5M",
      "maxlines": "10K",
      "daily": true
    },
    {
      "enable": true,
      "level": "INFO",
      "filename": "./log/market.log",
      "category": "marketlog",
      "pattern": "[%D %T] [%L] (%S) %M",
      "rotate": false,
      "maxsize": "10M",
      "maxlines": "20K",
      "daily": false
    }
  ]
}
Usage in code:
log4go.LOGGER("syslog").Info("...")
log4go.LOGGER("marketlog").Debug("...")
The Debug call on marketlog would not be written in this case, because the "INFO" level filters it out.

nginx webpy fastcgi caching can not work

The nginx.conf file content is as follows:
http {
    include       mime.types;
    default_type  application/octet-stream;

    # configure cache log
    log_format cache '$remote_addr - $host [$time_local] '
                     '"$request" $status $upstream_cache_status $body_bytes_sent '
                     '"$http_referer" "$http_user_agent"';

    fastcgi_cache_path /data0/nginx-cache levels=1:2
                       keys_zone=nginx_fastcgi_cache:1m
                       inactive=1d;
    fastcgi_temp_path /data0/nginx-cache/temp;

    server {
        listen      8080;
        server_name outofmemory.cn localhost;
        access_log  /data0/nginx-1.2.6/logs/cache.log cache;
        #charset koi8-r;

        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache nginx_fastcgi_cache;
        fastcgi_cache_min_uses 1;
        fastcgi_ignore_headers Cache-Control Expires;
        fastcgi_cache_use_stale error timeout invalid_header http_500;
        #add_header X-Cache cached;
        fastcgi_cache_valid 60m;

        location / {
            root /www/outofmemory.cn;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_param REMOTE_PORT $remote_port;
            fastcgi_param SERVER_ADDR $server_addr;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
            fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9002;
            fastcgi_cache_key "$scheme$request_method$host$request_uri";
            fastcgi_cache nginx_fastcgi_cache;
            fastcgi_cache_valid 60m;
        }
    }
}
Any help would be appreciated.
Thanks.
I was also having the same problem. As yukaizhao mentioned in his post, you need to add the line below to ignore the Expires header, or else fastcgi_cache won't work.
fastcgi_ignore_headers "Cache-Control" "Expires" "Set-Cookie";
Thanks yukaizhao!
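For reference, a minimal sketch of where that directive sits in the location block from the question (the values are taken from the config in the question plus the line above):
location / {
    fastcgi_pass           127.0.0.1:9002;
    fastcgi_cache          nginx_fastcgi_cache;
    fastcgi_cache_key      "$scheme$request_method$host$request_uri";
    fastcgi_cache_valid    60m;
    # Without this, an Expires/Cache-Control/Set-Cookie header from the backend
    # keeps nginx from caching the response.
    fastcgi_ignore_headers "Cache-Control" "Expires" "Set-Cookie";
}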
I have resolved this question.
I wrote an article explaining how to configure nginx + web.py + fastcgi cache:
http://outofmemory.cn/code-snippet/2154/nginx-webpy-fastcgi-cache-configuration-explain-in-detail
thanks.
