It seems that Sammy.js catches exceptions thrown inside the .get callbacks and prints something like
[Fri Feb 01 2013 14:12:46 GMT+0000 (GMT)] body 500 Error get *error message* Error {}
so is there a way to get the full stack trace?
OK, raise_errors can be set to re-throw the errors, for example:

    var app = Sammy(...);
    app.raise_errors = true;
    app.run();
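For context, here's a minimal sketch of the flag in a full app; the '#main' selector and '#/' route are made up for illustration:

    var app = Sammy('#main', function () {
        this.get('#/', function (context) {
            throw new Error('boom');   // with raise_errors set, this propagates
        });
    });
    app.raise_errors = true;           // re-throw instead of swallowing errors
    app.run('#/');

With the flag set, the exception reaches the browser's normal error handling, so the console shows the full stack trace instead of Sammy's one-line 500 log.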
I am trying to send logs to DD (Datadog) in such a way that they are received as JSON and therefore shown properly in the portal through attributes.
My logger is a simple Logger.new(STDOUT, level: Logger::INFO).
If I stick to its standard format, the output is in the form
I, [2022-07-30T22:43:35.216846 #1] INFO -- my-app: {"user":"1234"}
which is not really parsable by DD, since it is not proper JSON. In this case, however, all the logs at least appear on the DD portal.
Now I am trying to format the logs as JSON, in this way:
    def self.logger
      @logger ||= Logger.new(STDOUT, level: Logger::INFO)
      @logger.progname = 'my-app'
      @logger.formatter = proc do |severity, datetime, progname, msg|
        {timestamp: datetime.to_s, progname: progname, severity: severity,
         correlation: Datadog::Tracing.log_correlation, message: msg}.to_json
      end
      @logger
    end
This is my logger, and thanks to it the logs show up properly in DD and are parsed correctly, because my app formats them as proper JSON.
The problem with this approach, though, seems to be that the logs are sent in one full block, meaning that only the very first log is visible. Let's say I want to log this:
    my_hash = {"message" => '1', "prop" => '1234'}.to_json
    logger.info(my_hash)
    my_hash = {"message" => '2', "prop" => '12345'}.to_json
    logger.info(my_hash)
Only the first log will be shown correctly on the DD portal, parsed correctly with its message and prop attributes, but there is nothing about the second log.
Here is the thing: if I look at my app's output locally in the console, I see this:
{"timestamp":"2022-07-31 01:15:39 +0200","progname":"my-app","severity":"INFO","correlation":"dd.service=my-app dd.trace_id=2976451780376429536 dd.span_id=0","message":"{"message":"1","prop":"1234"}"}{"timestamp":"2022-07-31 01:15:39 +0200","progname":"my-app","severity":"INFO","correlation":"dd.service=my-app dd.trace_id=2976451780376429536 dd.span_id=0","message":"{"message":"2","prop":"12345"}"}127.0.0.1 - - [31/Jul/2022:01:15:39 +0200] "GET /controller/test_controller HTTP/1.1" 200 - 0.0024
So the second log actually gets output! But DD somehow sees only the first one.
(I know there is even a third line shown in this output, but that's just Sinatra's automatic logging for every HTTP call reaching the API.) What do you think the problem is?
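Looking at the console output above, the JSON documents are written back to back with no newline between them, so an agent that splits log events on newlines would presumably see one giant line and parse only the first object. A minimal sketch of the kind of fix that usually resolves this; the only change to the formatter above is the trailing "\n":

    def self.logger
      @logger ||= Logger.new(STDOUT, level: Logger::INFO)
      @logger.progname = 'my-app'
      @logger.formatter = proc do |severity, datetime, progname, msg|
        {timestamp: datetime.to_s, progname: progname, severity: severity,
         correlation: Datadog::Tracing.log_correlation,
         message: msg}.to_json + "\n" # one JSON document per line
      end
      @logger
    end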
I am very new to NativeScript (a few hours) and I am trying to follow the tutorial on their site. When running the code at step 12 of the tutorial, the app fails (only when I submit the login form) and crashes with the following error stack:
2018-10-10 20:35:06.321 nsplaydev[2295:419329] *** Terminating app due to uncaught exception 'NativeScript encountered a fatal error: TypeError: user.login is not a function. (In 'user.login()', 'user.login' is undefined)
at
1 signIn#file:///app/views/login/login-page.js:17:15
2 notify#file:///app/tns_modules/tns-core-modules/data/observable/observable.js:110:31
3 _emit#file:///app/tns_modules/tns-core-modules/data/observable/observable.js:127:24
4 tap#file:///app/tns_modules/tns-core-modules/ui/button/button.js:207:24
5 UIApplicationMain#[native code]
6 start#file:///app/tns_modules/tns-core-modules/application/application.js:272:26
7 run#file:///app/tns_modules/tns-core-modules/application/application.js:300:10
8 anonymous#file:///app/app.js:2:22
9 evaluate#[native code]
10 moduleEvaluation#[native code]
11 #[native code]
12 promiseReactionJob#[native code]
', reason: '(null)'
*** First throw call stack:
(0x211e5bf78 0x211054284 0x102e67e60 0x102e8d2e4 0x10378f088 0x1037901b4 0x21104f900 0x23f731a98 0x23f19be18 0x23f19c14c 0x23f19b0ec 0x23f76d208 0x23f76e468 0x23f74cb70 0x23f81d024 0x23f81fb50 0x23f81fec8 0x23f81854c 0x211de8a50 0x211de89cc 0x211de8284 0x211de2f64 0x211de2844 0x214091be8 0x23f73031c 0x103790044 0x10378e7a4 0x10378e26c 0x102e45630 0x103440e14 0x103449a24 0x103449a34 0x103449a34 0x103442ee0 0x1033dc198 0x1033b1e94 0x103546b9c 0x102e5a354 0x1035e2964 0x10344a494 0x103449a34 0x103449a34 0x103449a34 0x103442ee0 0x1033dc198 0x1033b1e94 0x103546c80 0x1035de8e0 0x102e51898 0x102e97f50 0x102ac8198 0x10257d3dc 0x211898020)
libc++abi.dylib: terminating with uncaught exception of type NSException
2018-10-10 20:35:06.321 nsplaydev[2295:419329] PlayLiveSync: Uncaught Exception
To learn the framework, I was purposely typing each line manually. I thought that could have introduced the error, so I went back and copy-pasted their exact code. I am still getting the issue.
Update: The link to the tutorial is here
Thanks
Based on the error log, it looks like you haven't defined a login function in your view model.
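For reference, a minimal sketch of a view model that would satisfy that call; the createViewModel name and the email/password properties are assumptions modeled on the tutorial's conventions, not the tutorial's exact code:

    var observableModule = require("tns-core-modules/data/observable");

    function createViewModel() {
        var viewModel = observableModule.fromObject({
            email: "",
            password: ""
        });

        // The page's tap handler invokes user.login(); without a method like
        // this, user.login is undefined and tapping the button throws the
        // TypeError shown in the stack trace above.
        viewModel.login = function () {
            console.log("logging in as " + viewModel.get("email"));
            // ...call your backend service here...
        };

        return viewModel;
    }

    exports.createViewModel = createViewModel;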
I'm using mocha with chai.assert for my tests. Errors are caught and reported, but they don't show a file/line number where they happen. I'm used to having location information with tests in other languages; without it, it's hard to figure out which assert failed.
Is there some way to get location information with mocha/chai/assert?
From version 1.9.1 onwards, if you set the includeStack flag to true, you'll get a stack trace on assertion failures:
    var chai = require("chai");
    chai.config.includeStack = true;
    var assert = chai.assert;

    describe("test", function () {
        it("blah", function () {
            assert.isTrue(false);
        });
    });
In versions prior to 1.9.1 you had to set chai.Assertion.includeStack = true. From 1.9.1 onwards this method of getting stack traces is deprecated. It is still available in 1.10.0 but may be removed in 1.11.0 or 2.0.0. (See here for details.)
The example above will show a stack trace where assert.isTrue fails, like this:
AssertionError: expected false to be true
at Assertion.<anonymous> (.../node_modules/chai/lib/chai/core/assertions.js:193:10)
at Assertion.Object.defineProperty.get (.../node_modules/chai/lib/chai/utils/addProperty.js:35:29)
at Function.assert.isTrue (.../node_modules/chai/lib/chai/interface/assert.js:242:31)
at Context.<anonymous> (.../test.js:7:16)
[... etc ...]
(I've truncated the trace to only what is relevant and shortened the paths.) The last frame shown above is the one where the error happened (.../test.js:7:16). I do not think that Chai allows showing only the file name and line number of the assertion call.
chai.Assertion.includeStack is now deprecated. Use chai.config instead:

    var chai = require("chai");
    chai.config.includeStack = true;
    var assert = chai.assert;
How do you make sense of a boost::mpi error code? For instance, what does error code 834983239 mean?
    ...
    mpi::communicator world;
    mpi::request req = world.isend(1, 1, std::string("hello"));
    // capture the completion status instead of calling test() twice
    boost::optional<mpi::status> st;
    while (!(st = req.test())) {
        boost::this_thread::sleep(boost::posix_time::seconds(1));
    }
    int errorCode = st->error();
    ...
The error code is unlikely to be filled in if there was not an error (and the default behavior for Boost.MPI is to throw an exception on error, not return a code). You should not need to check error codes manually unless you have changed Boost.MPI's default error handling settings.
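For illustration, a minimal sketch of leaning on that default behavior in a two-process run; this assumes nothing about your program beyond the isend above, and boost::mpi::exception's what() and error_class() report the failing routine and the portable MPI error class:

    #include <boost/mpi.hpp>
    #include <iostream>
    #include <string>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[]) {
        mpi::environment env(argc, argv);
        mpi::communicator world;
        try {
            if (world.rank() == 0) {
                world.send(1, 1, std::string("hello"));  // blocking send to rank 1
            } else if (world.rank() == 1) {
                std::string msg;
                world.recv(0, 1, msg);                   // matching receive
            }
        } catch (const mpi::exception& e) {
            // what() names the MPI routine that failed; error_class() maps the
            // raw implementation-specific code to a standard MPI error class.
            std::cerr << e.what() << " (error class " << e.error_class() << ")\n";
            return 1;
        }
        return 0;
    }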
I am trying to call cudppSort to sort a set of keys/values. I'm using the following code to set up the sort algorithm:
    CUDPPConfiguration config;
    config.op = CUDPP_ADD;
    config.datatype = CUDPP_UINT;
    config.algorithm = CUDPP_SORT_RADIX;
    config.options = CUDPP_OPTION_KEY_VALUE_PAIRS | CUDPP_OPTION_FORWARD | CUDPP_OPTION_EXCLUSIVE;

    CUDPPHandle planHandle;
    CUDPPResult result = cudppPlan(&planHandle, config, number_points, 1, 0);
    if (CUDPP_SUCCESS != result) {
        printf("ERROR creating CUDPPPlan\n");
        exit(-1);
    }
The program exits, however, on the line:

    CUDPPResult result = cudppPlan(&planHandle, config, number_points, 1, 0);

and prints to stdout:

    Cuda error: allocScanStorage in file 'c:/the/path/to/release1.1/cudpp/src/app/scan_app.cu' in line 279 : invalid configuration argument.
I looked at that line in scan_app.cu. It is:

    CUT_CHECK_ERROR("allocScanStorage");
So apparently my configuration has an error that is causing allocScanStorage to bomb out. There are only two calls to CUDA_SAFE_CALL in the function, and I don't see why either would have anything to do with the configuration.
What is wrong with my configuration?
So that this doesn't sit around as an unanswered question (I'm not sure if this is the right SO etiquette, but it seems like a resolved question shouldn't sit around unanswered...), I'm copying the comment I made above as an answer, since it was the solution:
I figured this out (I'm still learning CUDA at the moment). Because error checking is asynchronous, errors can show up in strange places if you don't check for them from time to time. My code had caused an error before I called cudppPlan, but because I didn't check for it, cudppPlan reported the error as if it had occurred there.
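For anyone hitting the same thing, a minimal sketch of the kind of checkpoint that catches this early; the checkCuda helper is made up, but cudaGetLastError and cudaGetErrorString are the standard CUDA runtime calls:

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Report (and clear) any error left over from earlier asynchronous work.
    static void checkCuda(const char* checkpoint) {
        cudaError_t err = cudaGetLastError();
        if (err != cudaSuccess) {
            fprintf(stderr, "CUDA error at %s: %s\n",
                    checkpoint, cudaGetErrorString(err));
            exit(EXIT_FAILURE);
        }
    }

    // Usage: checkpoint after each launch and before library calls, e.g.
    //   myKernel<<<grid, block>>>(...);
    //   checkCuda("after myKernel");    // catches the failure where it happened
    //   checkCuda("before cudppPlan");  // so cudppPlan isn't blamed for it

Calling cudaDeviceSynchronize() before the check also flushes pending asynchronous work, so the error gets attributed to the right spot.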