I've been wondering if there are any hidden / not well known features of couchdb?
We have had to debug map/reduce functions related to views and it is quite a pain to do so (no step by step debugging etc).
We have found links such as How to console log in couchdb but wondering if someone has found any more efficient ways and features.
There are no such debugging features in CouchDB, apart from the log-driven debugging approach. However, if you don't limit yourself to the default CouchDB distribution, you may find it useful to make a mocked version of the query server, interact with it directly like this ruby test case does, or even switch to the nodejs query server to debug views right in the browser - there are many options, as you see.
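For the log-driven approach, CouchDB's JavaScript query server exposes a log() function inside view code; whatever you pass ends up in couch.log. A minimal sketch (the document fields are made up for illustration):

// Log-driven debugging in a CouchDB map function: log() writes to couch.log,
// so you can trace which documents the view touched and why some were skipped.
function (doc) {
  if (!doc.type) {
    log('skipping doc without type: ' + doc._id);   // hypothetical field
    return;
  }
  if (doc.type === 'order') {
    log('emitting for ' + doc._id + ': ' + JSON.stringify(doc.total));
    emit(doc.customer, doc.total);                  // hypothetical fields
  }
}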
I have some pretty hefty views (1000+ lines of code). As I could not find a decent debugging framework, I stopped debugging in CouchDB altogether.
My view docs have a common library view called _lib, which is accessed from the other views in the same doc.
Initially I used a combination of the Kanape IDE and some emits that were only triggered if I had set a debug flag.
Now I have moved the complete framework to WebStorm, where I debug using Jasmine and profile using spy-js. This has allowed me to find bottlenecks and keep the views pretty fast, even across databases of around 25 GB.
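For reference, the core of that workflow is treating the map function as a plain JavaScript function and stubbing emit() so Jasmine can assert on it. A hedged sketch (the module path, view name and fields are invented, not the exact setup described above):

// by_customer.spec.js - run under Jasmine on Node; emit() is replaced by a stub
// that records rows, so the map function can be exercised without CouchDB.
describe('orders/by_customer map function', function () {
  var emitted;

  beforeEach(function () {
    emitted = [];
    global.emit = function (key, value) { emitted.push([key, value]); };
  });

  it('emits one row per order document', function () {
    var mapFn = require('./views/by_customer/map');   // hypothetical module
    mapFn({ _id: 'o1', type: 'order', customer: 'acme', total: 42 });
    expect(emitted).toEqual([['acme', 42]]);
  });
});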
I'm working on a javascript profiler for Mozilla Firefox, that would let me obtain all available information about the execution of the script on the page (DOM object calls, events, calls to functions like Math.random(), document and navigator object calls, as well as code's own execution tree with arguments etc etc).
Currently, I think that the best way to implement this sort of profiler is by modifying Firefox's own source code.
One way to go about it is to find all implementations for corresponding method calls and add profiler log calls there. But there are 2 problems with this approach:
The methods and objects are widely scattered, and I'm not really familiar with the source code at this moment. Tracking down all the functions and making sure that the profiler works as intended will take a lot of time.
When created in this way, the profiler is going to be difficult to maintain when Firefox source code evolves with time.
So I was wondering, if there is a single class/a small group of key classes in firefox source, that could be modified to allow me to collect the information I want? Or is there a better way of doing what I need to do?
The latest Aurora release of Firefox has a basic profiler built into its developer tools, or you can download a more advanced interface from the Mozilla Add-ons site which works with Firefox 16 or later.
I'm going to develop a web application using SmartGWT. I've heard about the Vaadin framework. I wonder which is the best one to use?
My application will be used by ~500 users at the same time, and I need high response performance and strong security control. I won't need dozens of fancy widgets - just enough to be able to use a decent tabbed pane and table. So what is the best choice given my needs?
Edit :
I'll also need a tool to export table content to Excel format (like in Google Docs Spreadsheet).
P.S.: I already checked this one: Should I use Vaadin Framework
I looked into both these frameworks, and others, and decided to go with the core GWT widgets. Your desire for high response performance will be difficult to satisfy with Vaadin, since it sends almost everything back to the server. And if you don't need super fancy widgets, then the core widgets (plus some incubator/3rd-party ones as needed) should be fine. I didn't get deep into testing SmartGWT, but it seemed to really tie you into their framework (making it difficult to use core widgets as well), and I read about difficulties when starting to do things differently from the showcase examples.
Good luck!
If you write your Vaadin application properly, it will perform perfectly well (check this one: https://vaadin.com/wiki/-/wiki/Main/Optimizing%20Sluggish%20UI).
If you know SmartGWT, use it. Also, if you have no experience with Vaadin, use SmartGWT. It might take you a lot of time to learn Vaadin (it requires some practice before one is able to create a well-performing application). The biggest problem with Vaadin is that it is very easy to write a slow application - because everything seems to be so easy, one tends to use many components and so on...
Does anyone know of any tools out there that will let me run and debug a VXML application visually? There are a ton of VXML development tools, but they all require you to build your application within them.
I have an existing application that uses JSPs to generate VXML, and I'm looking for a way to navigate through and debug the rendered VXML in much the same way that Firebug allows one to do this with HTML. I have some proxy-like tools that let me inspect the rendered code as it is sent to the VXML browser, but there's a ton of JS, which makes traversing the code by hand rather difficult.
Has anyone worked with a product that allows for this?
Thanks!
IVR Avenger
There is the JigSaw test suite - it has a free trial license and is reasonably priced.
There is IBM's debugger - part of the WebSphere Voice Toolkit.
Many other products have debuggers - a very good summary is here.
Disclaimer: I am the development manager for Voiyager (www.voiyager.com), a VoiceXML testing tool. It doesn't meet your criteria, nor do I believe it is the type of tool you want, but I thought it was worth mentioning.
As far as I know, there isn't such a test tool for VoiceXML. In fact there are very few VoiceXML tools on the market, and hardly any of them do testing or analysis. The vendors that created development tools have all been acquired by other companies. Some of them did offer various forms of debugging that were specific to their tool set or stayed at the dialog (caller input) level. From your question, I'm assuming you need much lower-level debugging capabilities.
I think the alternative paths are minimal and somewhat difficult. I believe your primary goal is to debug or rewrite an existing application, but you haven't provided any specific challenges beyond the JavaScript. Some thoughts or approaches that may help:
Isolate the JavaScript and place the code into a unit test harness (a sketch of this idea appears after this list). That will go a long way toward understanding the logic of the application. Any encapsulation of the JavaScript you perform will probably go a long way towards better code maintainability.
Attempt to run the VoiceXML through a translation layer to HTML so you could use Firebug. The largest challenge would involve caller input (i.e. processing the SRGS grammars). You could probably cheat this by just having the form accept a JSON string that populates the field values. There are tools on the market to test grammars. Depending on the nature of your problems, you could take a simple and light approach and attempt this over just the trouble areas.
Plumb the application with a lot of logging. This can be done through the VoiceXML LOG element, or by pushing the variable space back to the server. By adding intermediate forms, you may be able to get a dump from each via the VoiceXML Data element.
See if your application will run in one of the open source VoiceXML browsers (not sure of the state of the open source browsers as we've built and bought for our various product lines). If you can get it mostly working, you can use the development debugger to provide some ability to step through the logic. However, it is probably one of the more difficult paths as you'll really need to understand the browser to know when and where to stick your breakpoints and to figure out how to expose the data you want.
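To make the first suggestion concrete, here is a hedged sketch of pulling inline script logic into a plain module and testing it outside the voice browser; every name in it (formatAccountNumber, the file layout) is invented for illustration:

// account-utils.js - hypothetical module extracted from inline VoiceXML <script> blocks
function formatAccountNumber(digits) {
  // keep only digits, then group them in fours for TTS readback
  var clean = String(digits).replace(/\D/g, '');
  return clean.replace(/(\d{4})(?=\d)/g, '$1 ');
}
module.exports = { formatAccountNumber: formatAccountNumber };

// account-utils.test.js - plain Node assertions, no voice browser required
var assert = require('assert');
var utils = require('./account-utils');
assert.strictEqual(utils.formatAccountNumber('1234-5678'), '1234 5678');
console.log('account-utils tests passed');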
Good luck on the challenge. If you find another approach, I would be interested in seeing it posted.
An alternative debug environment is to use something like Asterisk with a VoiceXML browser plugin, like the one from http://www.voiceglue.org/ or, for a limited licence, i6net.
You can keep all the pieces separate (the dynamic HTML and VXML application in PHP/JSP/J2EE, TTS processing, and optional ASR processing) as separate virtual machines with something like VirtualBox. If the logic can be kept the same, then it is just a matter of changing the UI based on the channel.
A softphone is all you need to call a minimal Asterisk machine, which has the VoiceXML browser with the URL of the VXML application in its dial plan.
I just used Zend Framework, as PHP is used in this environment, and changed the view suffixes (phtml vs. vxml) based on the user-agent string.
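The same view-switching idea, sketched in plain JavaScript rather than Zend/PHP (the user-agent pattern and template names are assumptions, not the poster's actual code):

// Pick a view template for the same logical page based on the user-agent,
// analogous to switching phtml/vxml view suffixes in Zend Framework.
function selectView(userAgent, page) {
  var isVoiceBrowser = /vxml|voiceglue/i.test(userAgent || '');  // hypothetical UA check
  return page + (isVoiceBrowser ? '.vxml' : '.phtml');
}

console.log(selectView('voiceglue vxml browser', 'menu'));    // "menu.vxml"
console.log(selectView('Mozilla/5.0 (X11; Linux)', 'menu'));  // "menu.phtml"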
Flite for TTS is fine for debugging; when your app is ready you can either record phrases, or increase Flite's quality - there was a page on the Ubuntu forums with directions on how to do that with some additional sound files.
Have you tried Eclipse VTP or InVision Studio?
Eclipse VTP
This is an Eclipse plugin, but I feel that it is a little user-unfriendly (from a Japanese viewpoint).
InVision Studio (requires creating a user account)
This is Convergys's IVR tool. It has a mode for editing standard VXML (unfortunately, it's not an exact match).
For just debugging vxml, I use Nuance Cafe's VoiceXML checker. It doesn't give you a visual tree or anything, but it's pretty good at spotting syntax errors and is free. I think they might also have more advanced debugging tools if you look into it, but I haven't had the need. (Note: I have no association with them)
http://cafe.bevocal.com/tools/vxmlchecker/vxmlchecker.jsp
I'm looking at the same problem, and most of the links are down. I found a paper proposing an open source solution, which works as a plugin for Asterisk (https://www.researchgate.net/publication/228873959_Open_Source_VoiceXML_Interpreter_over_Asterisk_for_Use_in_IVR_Applications) and is available at https://sourceforge.net/projects/voxy/
I would like to know if there are current options to create a VXML structure graphically, like the next image.
When working on a big project that requires debugging (like every project), you realize how much people love "printf" over the IDE's built-in debugger. By this I mean:
Sometimes you need to render variable values to screen (especially for interactive debugging).
Sometimes you need to log them to a file.
Sometimes you have to change the visibility of a member (make it public) just so another class can access it (a logger or a renderer, for example).
Some other times you need to save the previous value in a member just to compare it with the new one during debugging.
...
When a project gets huge with a lot of people working on it, all this debugging-specific code can get messy and hard to differentiate from normal code. This can be crazy for those who have to update/change someone else's code or to prepare it for a release.
How do you solve this?
It is always good to have naming standards, and I guess that debug-coding standards would be quite useful (like marking every debug variable with a _DBG suffix). But I also guess naming is just not enough. Maybe centralizing it into a friendly tracker class, or creating a robust set of macros in order to strip it all out for the release. I don't know.
What design techniques, patterns and standards would you embrace if you are asked to write a debug-coding document for all others in the project to follow?
I am not talking about tools, libraries or IDE-specific commands, but for OO design decisions.
Thanks.
Don't commit debugging code, just debugging tools.
Logging, OTOH, has a natural place in exception-handling routines and such. Also, a few well-placed logging statements in a few commonly used APIs can be good for debugging.
Like one log statement to log all SQL executed by the system.
My vote would be with what you described as a friendly tracker class. This class would keep all of that centralized, and potentially even allow you to change debug/logging strategies dynamically.
I would avoid things like macros, simply because that's a compiler trick and not true OO. By abstracting the concept of debugging/logging, you have the opportunity to do lots of things with it, including making it a no-op if needed.
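A minimal sketch of that kind of tracker in JavaScript (all names invented): the rest of the code only ever calls Tracker.note(), so the strategy can be swapped out, or dropped entirely for a release build, in one place.

// Hypothetical centralized tracker: debug code calls Tracker.note(),
// and the actual behaviour (console, file, nothing) is a pluggable strategy.
var Tracker = {
  strategy: null,
  note: function (label, value) {
    if (this.strategy) {
      this.strategy(label, value);
    }
    // with no strategy set this is a cheap no-op - effectively the release mode
  }
};

// development: dump to the console
Tracker.strategy = function (label, value) {
  console.log('[DBG] ' + label + ' = ' + JSON.stringify(value));
};

Tracker.note('cart.total', 42);   // [DBG] cart.total = 42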
Logging or debugging? I believe that well-designed and properly unit-tested application should not need to be permanently instrumented for debugging. Logging on the other hand can be very useful, both in finding bugs and auditing program actions. Rather than cover a lot of information that you can get elsewhere, I would point you at logging.apache.org for either concrete implementations that you can use or a template for a reasonable design of a logging infrastructure.
I think it's particularly important to avoid using System.out / printf calls directly and instead use (even a custom) logging class. That at least gives you a centralized kill-switch for all the logging (minus the call costs in Java).
It is also useful to have that log class support info/warn/error/caveat levels, etc.
I would be careful about error levels, user ids, metadata, etc. since people don't always add them.
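Roughly what that kill-switch plus levels might look like (a generic sketch, not any particular library's API):

// Hypothetical logger with levels and a single central kill-switch,
// instead of scattering console.log/printf calls around the code base.
var Log = {
  enabled: true,      // the kill-switch
  threshold: 1,       // 0 = info, 1 = warn, 2 = error
  write: function (level, msg) {
    if (this.enabled && level >= this.threshold) {
      console.log(['INFO', 'WARN', 'ERROR'][level] + ': ' + msg);
    }
  },
  info:  function (msg) { this.write(0, msg); },
  warn:  function (msg) { this.write(1, msg); },
  error: function (msg) { this.write(2, msg); }
};

Log.warn('cache miss for user 42');   // printed
Log.info('entering render loop');     // filtered out by the threshold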
Also, one of the most common problems that I've seen is that people put temporary printfs in the code as they debug something, and then forget where they put them. I use a tool that tracks everything that I do, so I can quickly identify all my recent edits since an abstract checkpoint and delete them. In most cases, however, you may want to impose special rules on debug code that can be checked into your source control.
In VB6 you've got
Debug.Print
which sends output to a window in the IDE. It's bearable for small projects. VB6 also has
#If <some var set in the project properties>
'debugging code
#End If
I have a logging class which I declare at the top with
Dim Trc as Std.Traces
and use in various places (often inside #If/#End If blocks)
Trc.Tracing = True
Trc.Tracefile = "c:\temp\app.log"
Trc.Trace 'with no argument stores date stamp
Trc.Trace "Var=" & var
Yes it does get messy, and yes I wish there was a better way.
We have begun routinely using a static class that we write trace messages to. It is very basic and still requires a call from the executing method, but it serves our purpose.
In the .NET world, there is already a fair amount of built in trace information available, so we do not need to worry about which methods are called or what the execution time is. These are more for specific events which occur in the execution of the code.
If your language does not support categorization of messages through its tracing constructs, it is something that you should add to your tracing code. Something that identifies different levels of importance and/or functional areas is a great start.
Just avoid instrumenting your code by modifying it. Learn to use a debugger. Make logging and error handling easy. Have a look at Aspect-Oriented Programming.
Debugging/logging code can indeed be intrusive. In our C++ projects, we wrap common debug/log code in macros - very much like asserts. I often find that logging is most useful in the lower-level components, so it doesn't have to go everywhere.
There is a lot in the other answers to both agree and disagree with :) Having debug/logging code can be a very valuable tool for troubleshooting problems. In Windows, there are many techniques - the two major ones are:
Extensive use of checked (DBG) build asserts and lots of testing on DBG builds.
the use of ETW in what we call 'fre' or 'retail' builds.
Checked builds (what most others call DEBUG builds) are very helpful for us as well. We run all our tests on both 'fre' and 'chk' builds (on x86 and AMD64 as well; all server stuff runs on Itanium too...). Some people even self-host (dogfood) on checked builds. This does two things:
Finds lots of bugs that wouldn't be found otherwise.
Quickly eliminates noisy or unnecessary asserts.
In Windows, we use Event Tracing for Windows (ETW) extensively. ETW is an efficient static logging mechanism. The NT kernel and many components are very well instrumented. ETW has a lot of advantages:
Any ETW event provider can be dynamically enabled/disabled at run time - no rebooting or process restarts required. Most ETW providers provide granular control over individual events, or groups of events.
Events from any provider (most importantly the kernel) can be merged into a single trace so all events can be correlated.
A merged trace can be copied off the box and fully processed - with symbols.
The NT kernel sample profile interrupt can generate an ETW event - this yields a very lightweight sample profiler that can be used at any time.
On Vista and Windows Server 2008, logging an event is lock free and fully multi-core aware - a thread on each processor can independently log events with no synchronization needed between them.
This is hugely valuable for us, and can be for your Windows code as well - ETW is usable by any component - including user mode, drivers and other kernel components.
One thing we often do is write a streaming ETW consumer. Instead of putting printfs in the code, I just put ETW events at interesting places. When my component is running, I can then just run my ETW watcher at any time - the watcher receives the events and displays them, counts them, or does other interesting things with them.
I very much respectfully disagree with tvanfosson. Even the best code can benefit from well-implemented logging. Well-implemented static run-time logging can make finding many problems straightforward - without it, you have zero visibility into what is happening in your component. You can look at inputs and outputs and guess - that's about it.
The key here is the term 'well implemented'. Instrumentation must be in the right place. Like anything else, this requires some thought and planning. If it is not in helpful/interesting places, then it will not help you find problems in a development, testing, or deployed scenario. You can also have too much instrumentation, causing perf problems when it is on - or even off!
Of course, different software products or components will have different needs. Some things may need very little instrumentation. But a widely deployed or critical component can greatly benefit from well-engineered instrumentation.
Here is a scenario for you (note, this very well may not apply to you... :) ). Let's say you have a line-of-business app deployed on every desktop in your company - hundreds or thousands of users. What do you do when someone has a problem? Do you stop by their office and hook up a debugger? If so, how do you know what version they have? Where do you get the right symbols? How do you get the debugger on their system? What if it only happens once every few hours or days? Are you going to let the system run with the debugger connected all that time?
As you can imagine - hooking up debugger in this scenario is disruptive.
If your component is instrumented with ETW, then you could ask your user to simply turn on tracing, continue to do his/her work, then hit the "WTF" button when the problem happens. Even better: your app may be able to self-log - detecting problems at run time and turning on logging automagically. It could even send you ETW files when problems occurred.
These are just simple examples - logging can be handled in many different ways. My key recommendation here is to think about how logging might be able to help you find, debug, and fix problems in your components at dev time, test time, and after they are deployed.
I was burnt by the same issue in about every project I've been involved with, so now I have this habit that involves extensive use of logging libraries (whatever the language/platform provides) from the start. Any Log4X port is fine for me.
Building yourself some proper debug tools can be extremely valuable. For example in a 3D environment, you might have an option to display the octree, or to render planned AI paths, or to draw waypoints that are normally invisible. You'd probably also want some on-screen display to aid with profiling too: the current framerate, count of polygons on screen, texture memory usage, and so on.
Although this takes some time and effort to do, in the long run it can save you a lot of time and frustration.
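For concreteness, an on-screen debug overlay in a browser/canvas setup might be as small as the following sketch (purely illustrative and not tied to any particular engine; the registered lines are placeholders):

// Hypothetical debug overlay: toggled with a key, it draws the current framerate
// and any registered debug lines on top of the scene each frame.
var debugOverlay = {
  visible: false,
  lines: [],                          // e.g. 'polys: 12840', 'AI paths: shown'
  lastTime: performance.now(),

  draw: function (ctx) {
    var now = performance.now();
    var fps = 1000 / (now - this.lastTime);
    this.lastTime = now;
    if (!this.visible) return;
    ctx.fillStyle = 'lime';
    ctx.font = '12px monospace';
    ctx.fillText('fps: ' + fps.toFixed(1), 10, 20);
    this.lines.forEach(function (line, i) {
      ctx.fillText(line, 10, 36 + i * 16);
    });
  }
};

// toggle it from a key handler; call debugOverlay.draw(ctx) at the end of the render loop
window.addEventListener('keydown', function (e) {
  if (e.key === 'F3') debugOverlay.visible = !debugOverlay.visible;
});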
I've been looking at ways people test their apps in order to decide where to do caching or apply some extra engineering effort, and so far httperf and a simple sesslog have been quite helpful.
What tools and tricks did you apply on your projects?
I use httperf for a high level view of performance.
Rails has a performance script built in, that uses the ruby-prof gem to analyse calls deep within the Rails stack. There is an awesome Railscast on Request Profiling using this technique.
NewRelic have some seriously cool analysis tools that give near real-time data.
They just made a "Lite" version available for free.
I use JMeter for session-based testing - it allows very fine-grained control over the pages you want to hit, parameters to inject, loops to go through, etc. It's great for simulating how many real users your site can handle, rather than just performance testing a set of static URLs. You can distribute tests over multiple machines quite easily by loading up the jmeter-server on computers with publicly accessible IPs. I have found some limitations in the number of users/threads any one machine can throw at a server at once (it depends on the test), but JMeter has helped my team improve our app's capacity for users by 6x.
It doesn't have any fancy graphing -- I actually use my own in-house graphing with gruff that can do performance analysis on request time for certain pages and actions.
I'm evaluating a new open-source web page instrumentation and measurement suite called Jiffy. It's not specifically for Ruby; it works for all kinds of web apps.
There's also a Jiffy Firebug Extension for rendering the metrics inside the browser.
I also suggest you look at Browser Mob for load testing.
A colleague of mine has also posted some interesting thoughts on this.