I have been trying to use log4go in Go, but I could not find a proper example where log4go configuration properties such as rotation, maxsize, etc. were used to create a logger. Can somebody provide an example? I have already referred to many sites.
log4go is not well documented; I found some documentation in the original repository.
If you can, I'd use a different library such as logrus, which has better documentation and examples and is actively developed.
The easy way is to use the XML log configuration, for example:
<logging>
<filter enabled="true">
<tag>stdout</tag>
<type>console</type>
<!-- level is (:?FINEST|FINE|DEBUG|TRACE|INFO|WARNING|ERROR) -->
<level>INFO</level>
</filter>
<filter enabled="true">
<tag>file</tag>
<type>file</type>
<level>INFO</level>
<property name="filename"><log file Path></property>
<!--
%T - Time (15:04:05 MST)
%t - Time (15:04)
%D - Date (2006/01/02)
%d - Date (01/02/06)
%L - Level (FNST, FINE, DEBG, TRAC, WARN, EROR, CRIT)
%S - Source
%M - Message
It ignores unknown format strings (and removes them)
Recommended: "[%D %T] [%L] (%S) %M"
-->
<property name="format">[%D %T] [%L] (%S) %M</property>
<property name="rotate">true</property> <!-- true enables log rotation, otherwise append -->
<property name="maxsize">10M</property> <!-- \d+[KMG]? Suffixes are in terms of 2**10 -->
<property name="maxlines">0K</property> <!-- \d+[KMG]? Suffixes are in terms of thousands -->
<property name="daily">true</property> <!-- Automatically rotates when a log message is written after midnight -->
<property name="maxbackup">10</property> <!-- Max backup for logs rotation -->
</filter>
</logging>
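Loading that XML config from Go then takes only a couple of lines. A minimal sketch, assuming the github.com/alecthomas/log4go import path and a config file called logging.xml (adjust both to your setup):
package main

// Minimal sketch: the import path and the config file name are assumptions.
import log4go "github.com/alecthomas/log4go"

func main() {
	// Reads the XML above and installs the console and file filters.
	log4go.LoadConfiguration("logging.xml")
	defer log4go.Close()

	log4go.Info("application started")         // passes both INFO filters
	log4go.Fine("too chatty, dropped by INFO") // filtered out by both
}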
Personally, I prefer zerolog: https://github.com/rs/zerolog
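If you go the zerolog route, a minimal sketch looks like this (note that zerolog does not rotate files by itself; you would pair it with an external rotation mechanism for that):
package main

import (
	"os"

	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
)

func main() {
	// The global level plays the same role as the <level> filters above.
	zerolog.SetGlobalLevel(zerolog.InfoLevel)

	// Human-friendly console output; drop ConsoleWriter for plain JSON.
	log.Logger = log.Output(zerolog.ConsoleWriter{Out: os.Stderr})

	log.Info().Str("component", "example").Msg("service started")
	log.Debug().Msg("dropped, below the global Info level")
}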
Here is one log4go example configuration that defines two logs:
{
"console": {
"enable": true,
"level": "ERROR"
},
"files": [{
"enable": true,
"level": "DEBUG",
"filename":"./log/sys.log",
"category": "syslog",
"pattern": "[%D %T] [%L] (%S) %M",
"rotate": true,
"maxsize": "5M",
"maxlines": "10K",
"daily": true
},
{
"enable": true,
"level": "INFO",
"filename":"./log/market.log",
"category": "marketlog",
"pattern": "[%D %T] [%L] (%S) %M",
"rotate": false,
"maxsize": "10M",
"maxlines": "20K",
"daily": false
}
]
}
usage in code:
log4go.LOGGER("syslog").Info("...")
log4go.LOGGER("marketlog").Debug("...")
The Debug call on marketlog would not be written in this case, because the "INFO" level automatically filters it out.
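The answer does not say which log4go fork it uses; the JSON config with categories matches the jeanphorn/log4go fork, so a hypothetical setup could look roughly like this (the import path, config path, and LoadConfiguration call are assumptions taken from that fork's README):
package main

// Sketch only: assumes the jeanphorn/log4go fork; names may differ elsewhere.
import log4go "github.com/jeanphorn/log4go"

func main() {
	// Load the JSON config shown above (the path is an example).
	log4go.LoadConfiguration("./config.json")
	defer log4go.Close()

	log4go.LOGGER("syslog").Info("system started")
	log4go.LOGGER("marketlog").Debug("dropped, marketlog is set to INFO")
}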
I have a ROS node and debug it in VS Code via a launch task like this:
{
"name": "(gdb) Launch ROS node",
"type": "cppdbg",
"request": "launch",
"program": "${workspaceFolder}/build/catkin_ws/devel/lib/my_ros_node/my_ros_node",
"args": [
<some cmd-line arguments>
],
"cwd": "${workspaceFolder}/build/catkin_ws/devel/lib/my_ros_node/",
"environment": [{"name": "LD_LIBRARY_PATH", "value": "${workspaceFolder}/build/lib/:/opt/ros/melodic/lib"} ],
"externalConsole": false,
"MIMode": "gdb",
"setupCommands": [
{
"description": "Enable pretty-printing for gdb",
"text": "-enable-pretty-printing",
"ignoreFailures": true
}
]
},
Everything works fine. But at some point I have to remap topics. The easiest way is to use roslaunch, so I wrote a launch file for it:
<launch>
<remap from="topicA" to="topicB" />
<remap from="topicD" to="topicC" />
<node name="my_ros_node" pkg="my_ros_node" type="my_ros_node" args="<some args>"/>
</launch>
Now I have to adjust the VS Code launch target, but I cannot figure out how to make a VS Code launch target start the ROS node via the roslaunch command. The VS Code extension for ROS does not work well, probably because my ROS node is just a small part of my workspace.
I have found only one solution: launch the ROS node via roslaunch separately and then attach to the process in VS Code. It works fine but asks for root access.
Is there a simpler solution than mine?
Check out the VS Code ROS extension (ms-iot.vscode-ros):
https://marketplace.visualstudio.com/items?itemName=ms-iot.vscode-ros
This adds, among other things, support for debug configs of
"type": "ros", "request": "launch", where you specify "target": "<path_to_your_launch_file>".
I am trying to use osquery in an environment with WEF/WEC, and what I am trying to do is collect all the Windows events that are stored via subscriptions on the WEC servers.
My problem is that when I gather the Windows events via osquery, I do not seem to be able to get the "Computer" field, which contains the hostname that actually generated the event.
Did somebody manage to get this working, or is it an actual limitation of osquery? Looking at the windows_events table schema (https://osquery.io/schema/4.5.1/#windows_events), it does not seem that the "Computer" field has been taken into account.
As an example, I have a WEC configured on a host named DESKTOP-JC2OUUQ and I have a subscription there for a laptop named DESKTOP-BEH0A7O. The event logs are flowing correctly to the WEC and I can receive them. The following is one of the events I am receiving:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event" xml:lang="en-US">
<System>
<Provider Name="Microsoft-Windows-Security-SPP" Guid="{E23B33B0-C8C9-472C-A5F9-F2BDFEA0F156}" EventSourceName="Software Protection Platform Service" />
<EventID Qualifiers="16384">16384</EventID>
<Version>0</Version>
<Level>4</Level>
<Task>0</Task>
<Opcode>0</Opcode>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2020-10-22T16:20:17.2647971Z" />
<EventRecordID>907</EventRecordID>
<Correlation />
<Execution ProcessID="0" ThreadID="0" />
<Channel>Application</Channel>
<Computer>DESKTOP-BEH0A7O</Computer>
<Security />
</System>
<EventData>
<Data>2020-12-18T12:30:17Z</Data>
<Data>RulesEngine</Data>
</EventData>
<RenderingInfo Culture="en-US">
<Message>Successfully scheduled Software Protection service for re-start at 2020-12-18T12:30:17Z. Reason: RulesEngine.</Message>
<Level>Information</Level>
<Task />
<Opcode />
<Channel />
<Provider>Microsoft-Windows-Security-SPP</Provider>
<Keywords>
<Keyword>Classic</Keyword>
</Keywords>
</RenderingInfo>
</Event>
When I try to collect this event with osquery, I get the following output:
{
"name": "windows_events_query",
"hostIdentifier": "DESKTOP-JC2OUUQ",
"calendarTime": "Thu Oct 22 16:26:14 2020 UTC",
"unixTime": 1603383974,
"epoch": 0,
"counter": 0,
"numerics": false,
"decorations": {
"host_uuid": "A7A0828C-1264-4E24-A67F-F5B69BE86165",
"username": "vagrant"
},
"columns": {
"data": "{\"EventData\":[\"2020-12-18T12:30:17Z\",\"RulesEngine\"]}",
"datetime": "2020-10-22T16:20:17.2647971Z",
"eventid": "16384",
"keywords": "0x80000000000000",
"level": "4",
"provider_guid": "{E23B33B0-C8C9-472C-A5F9-F2BDFEA0F156}",
"provider_name": "Microsoft-Windows-Security-SPP",
"source": "Application",
"task": "0",
"time": "1603383958"
},
"action": "added"
}
As you can see, among other fields I am not seeing the "Computer" tag, which, to my knowledge, is the only one containing the actual host that generated the event. Is there any way to get that value with osquery, or is it a limitation?
Thanks!
Osquery did not support the Computer field. It does now:
https://github.com/osquery/osquery/pull/6952
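Once you run a build that includes that change, the originating host should be selectable alongside the other columns. A hedged example query (the computer_name column name is an assumption based on recent osquery schemas, so verify it against the schema of the version you deploy):
-- computer_name is an assumption; check your osquery version's schema
SELECT computer_name, provider_name, eventid, datetime, data
FROM windows_events
WHERE source = 'Application';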
Newbie to microservices here.
I have been looking into developing a microservice with Spring Actuator while using Consul for service discovery and failure recovery.
I have configured a cluster as explained in the Consul documentation.
Now what I'm trying to do is configure a Consul watch that triggers when any of my services is down and executes a shell script to restart the service. The following is my configuration file.
{
"bind_addr": "127.0.0.1",
"datacenter": "dc1",
"encrypt": "EXz7LsrhpQ4idwqffiFoQ==",
"data_dir": "/data",
"log_level": "INFO",
"enable_syslog": true,
"enable_debug": true,
"enable_script_checks": true,
"ui":true,
"node_name": "SpringConsulClient",
"server": false,
"service": { "name": "Apache", "tags": ["HTTP"], "port": 8080,
"check": {"script": "curl localhost >/dev/null 2>&1", "interval": "10s"}},
"rejoin_after_leave": true,
"watches": [
{
"type": "service",
"handler": "/Consul-Script.sh"
}
]
}
Any help/tip would be greatly appreciated.
Regards,
Chrishan
Take a closer look at the description of the service watch type in the official documentation. It has an example of how to specify it:
{
"type": "service",
"service": "redis",
"args": ["/usr/bin/my-service-handler.sh", "-redis"]
}
Note that it has no handler property but instead takes the path to the script as an argument. And one more thing:
It requires the "service" parameter
It seems that in your case you need to specify it as follows:
"watches": [
{
"type": "service",
"service": "Apache",
"args": ["/fully/qualified/path/to/Consul-Script.sh"]
}
]
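As for the handler itself, Consul pipes the watch result (JSON describing the health of the watched service) to the script on stdin, so Consul-Script.sh can inspect it before restarting anything. A rough sketch; the restart command is an assumption about how your service is managed:
#!/usr/bin/env bash
# Consul-Script.sh -- sketch of a service watch handler.
# Consul invokes this with the current health state of the watched
# service as JSON on stdin.
payload=$(cat)
echo "$(date) watch fired: ${payload}" >> /var/log/consul-watch.log

# Restart logic is an assumption; adapt it to how the service actually runs.
if ! curl -fs http://localhost:8080 >/dev/null; then
  systemctl restart apache2
fi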
I'm trying to use the VS Code Chrome debugger to debug Angular 2 (2.0.0-beta.9) & TypeScript (v1.8.7). I'm setting the breakpoint in the .ts file, but the debugger displays the .js. The debugger does show the .ts when the whole application is in one folder, but it doesn't behave correctly when the application is composed of subfolders. At first I thought it wasn't able to resolve the mapping, but I have diagnostics turned on and can see that the paths are being properly resolved.
Here's an example from the diagnostic window:
›Paths.scriptParsed: resolved http://localhost:3000/bin/hero/hero.service.js to c:\MyDev\ng2\bin\hero\hero.service.js. webRoot: c:\MyDev\ng2
›SourceMaps.createSourceMap: Reading local sourcemap file from c:\MyDev\ng2\bin\hero\hero.service.js.map
›SourceMap: creating SM for c:\MyDev\ng2\bin\app.component.js
›SourceMap: no sourceRoot specified, using script dirname: c:\MyDev\ng2\bin
›SourceMaps.scriptParsed: c:\MyDev\ng2\bin\app.component.js was just loaded and has mapped sources: ["c:\\MyDev\\ng2\\app\\app.component.ts"]
›SourceMaps.scriptParsed: Resolving pending breakpoints for c:\MyDev\ng2\app\app.component.ts
tsconfig.json:
{
"compilerOptions": {
"target": "es5",
"module": "system",
"moduleResolution": "node",
"sourceMap": true,
"emitDecoratorMetadata": true,
"experimentalDecorators": true,
"removeComments": false,
"noImplicitAny": false,
"outDir": "bin"
},
"exclude": [
"node_modules",
"typings"
]
}
The section from launch.json:
{
"name": "Launch localhost with sourcemaps",
"type": "chrome",
"request": "launch",
"url": "http://localhost:3000/index.html",
"sourceMaps": true,
"webRoot": "${workspaceRoot}",
"diagnosticLogging": true
}
Unfortunately, the correct way to map your source code to the webpack output has changed a few times.
You already have diagnosticLogging turned on in your launch.json, which means you should have lines like these in your JavaScript console:
SourceMap: mapping webpack:///./src/main.ts => C:\Whatever\The\Path\main.ts
This should give you a clear idea of where it is trying to search for your source code.
Then you add a sourceMapPathOverrides entry to the launch.json to help it find your files. It should look something like this:
"sourceMapPathOverrides": {
"webpack:///./*": "${workspaceRoot}/SourceFolder/*"
},
Obviously, replace SourceFolder with the actual path.
Edit:
In 2019, this is still valid, but how you enable the logging has changed: diagnosticLogging has been replaced by the trace option (set to "verbose" below).
So your setup will look like this:
{
"name": "Launch localhost with sourcemaps",
"type": "chrome",
"request": "launch",
"url": "http://localhost:3000/index.html",
"sourceMaps": true,
"webRoot": "${workspaceRoot}",
"trace": "verbose"
}
This will give you lots of output, still including rows starting with SourceMap: mapping, which you can use to build the correct set of sourceMapPathOverrides as described before.
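Putting both pieces together, a launch configuration might end up looking like this (the override path is just an example and has to match what the SourceMap: mapping lines report for your project):
{
    "name": "Launch localhost with sourcemaps",
    "type": "chrome",
    "request": "launch",
    "url": "http://localhost:3000/index.html",
    "sourceMaps": true,
    "webRoot": "${workspaceRoot}",
    "trace": "verbose",
    "sourceMapPathOverrides": {
        "webpack:///./*": "${workspaceRoot}/SourceFolder/*"
    }
}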
I've created a simple restartless Firefox add-on and am trying to localize it. I cannot localize the add-on name and description. I'm trying to do it as described here: Localizing extension descriptions.
Below are my install.rdf file and package.json.
package.json
{
"name": "find_in_files",
"title": "Find in files",
"id": "{7DE613B7-54D9-4899-A018-861472402B2E}",
"description": "Search for substring in files",
"author": "Vitaly Shulgin",
"license": "MPL 2.0",
"version": "1.1",
"unpack": "true",
"preferences": [
{
"name": "SearchDirectory",
"title": "Search directory",
"description": "You must specify it before search. Please, be patient - it may takes some time to index documents before search will return correct result.",
"type": "directory",
"value": ""
},
{
"name": "DefaultLocale",
"title": "Default language",
"description": "Default language to use when searching in non-unicode documents",
"type": "menulist",
"value": "ru-ru",
"options": [
{
"value": "en-us",
"label": "English"
},
{
"value": "ru-ru",
"label": "Russian"
}
]
},
{
"name": "OutputFileName",
"title": "Temporary output file name",
"description": "Temporary output file name",
"type": "string",
"value": "fif-result.html",
"hidden": true
}
]
}
install.rdf
<?xml version="1.0" encoding="utf-8" ?>
<RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:em="http://www.mozilla.org/2004/em-rdf#">
<Description about="urn:mozilla:install-manifest">
<em:id>{7DE613B7-54D9-4899-A018-861472402B2E}</em:id>
<!-- begin localization -->
<em:localized>
<Description>
<em:locale>ru-Ru</em:locale>
<em:name>Поиск в файлах</em:name>
<em:description>Поиск выделенного текста в файлах</em:description>
</Description>
</em:localized>
<em:localized>
<Description>
<em:locale>en-Us</em:locale>
<em:name>Find in Files</em:name>
<em:description>Search for selected text in files</em:description>
</Description>
</em:localized>
<!-- em:name>Find in files</em:name -->
<!-- em:description>Search for selected text in files</em:description -->
<!-- end localization -->
<em:version>1.1</em:version>
<em:type>2</em:type>
<em:targetApplication>
<Description>
<em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id> <!--Firefox-->
<em:minVersion>1.5</em:minVersion>
<em:maxVersion>3.0.*</em:maxVersion>
</Description>
</em:targetApplication>
<em:unpack>true</em:unpack>
<em:creator>Vitaly A. Shulgin</em:creator>
<em:targetPlatform>WINNT</em:targetPlatform>
</Description>
</RDF>
What am I doing wrong?
The answer is that the cfx xpi command from the Mozilla Add-on SDK will overwrite install.rdf if you have it in the project folder. So, to get things working properly: create the XPI package, unpack it (unzip) and you will find the auto-generated install.rdf inside (!!!), replace it with your own install.rdf, and re-pack the XPI with the zip command.
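In shell terms the workflow looks roughly like this (the XPI file name is just an example based on this project's package name):
cfx xpi                                   # builds e.g. find_in_files.xpi with an auto-generated install.rdf
unzip find_in_files.xpi -d xpi-contents   # unpack it to get at the generated install.rdf
cp install.rdf xpi-contents/install.rdf   # replace it with your localized install.rdf
cd xpi-contents && zip -r ../find_in_files.xpi . && cd ..   # re-pack the XPI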
That's all, folks!