I'm currently experimenting with FoundationDB in a .NET Web API 2 project. The Web API controller performs a simple GetRange against the FoundationDB cluster, and everything works fine ... if I run the project just once.
The second time I run it, I get the dreaded api_version_already_set error, and the only way to get everything up and running again is to restart IIS. I've found this similar question, and the only "solution" proposed in the answer is to run one process per App Domain, which isn't really ideal.
I have also tried this hack used in the .NET library, but all it does is change the error from api_version_already_set to network_already_setup or broken_promise.
Has anybody else found a better solution?
PS: To temporarily work around this, I'm running the Web API self-hosted, which seems to solve the problem but makes using FoundationDB together with Web API awkward outside of a test environment.
This issue is still present in version 5.x, for the same reason. The network thread can only be created (and shut down) once per process, so any host that uses multiple Application Domains per process will not work. There does not seem to be any incentive to solve this issue (which mostly impacts managed platforms like .NET, and maybe Java).
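To make the constraint concrete, here is a minimal Python sketch (names and structure are my own, not any binding's actual API) of the kind of once-per-process guard such hacks attempt. The catch under IIS is that each AppDomain gets its own copy of the static flag, while the native network thread is process-wide, so a second AppDomain still trips the error:

```python
import threading

_init_lock = threading.Lock()
_network_started = False

def ensure_network_started():
    """Start the client network thread at most once per process (sketch).

    Hypothetical guard: the real fdb.api_version()/network setup would
    replace the comment below. Under IIS, each AppDomain holds its own
    _network_started flag, so the guard cannot see that another domain
    already initialized the process-wide network thread.
    """
    global _network_started
    with _init_lock:
        if _network_started:
            return False  # already initialized in this process
        # fdb.api_version(510) and network startup would go here
        _network_started = True
        return True
```

The first call performs the setup and returns True; any later call in the same interpreter is a no-op returning False, which is exactly the behavior that cannot be replicated across AppDomain boundaries.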
Fortunately, with ASP.NET Core and web hosts like Kestrel (out of process, no AppDomains), this issue becomes moot.
This can still cause issues with unit test runners that cache the host process between runs; you need to disable that caching feature for tests to run reliably.
I haven't been able to find an easy way to configure Jenkins with a CakePHP project on my localhost to implement Continuous Integration properly.
I would appreciate it if someone could supply an easy-to-understand tutorial, from configuring Jenkins through running the CakePHP unit tests.
Thanks
As @xgalvin said, getting all the dependencies running on Windows is a messy and error-prone task. You'd be better off with what he suggested, or with a Linux-based server, be it a virtual machine or not. Either way, definitely use the Jenkins PHP template. I've personally done this a couple of times; the first time was a bit of a hassle, but it is not very hard to do. All you need is basic knowledge of Linux/bash/PHP/Jenkins and some time.
I'm working with WWW::Mechanize to slurp a product catalog from a website (Ingram Micro) into our database. Everything is over SSL.
I'm receiving a random error like the following:
Protocol scheme 'https' is not supported (LWP::Protocol::https not installed)
...but LWP::Protocol::https is installed. In fact, everything works fine most of the time. The only thing I can think of is that this has something to do with using threads on Windows (the process splits the job across 25 threads to compensate for the long time Ingram's website takes to deliver each page). I haven't seen the error (so far) when I use a single thread.
The error doesn't happen every time, and generally only one thread receives it; the rest keep working without it.
However, this is really weird. I'd like to know if anyone here has seen something like this before, or if someone has any idea why this might happen.
Thanks,
Francisco
Edit: Just in case someone wonders, I'm on Windows 7 x64 and Perl 5.16.3 x64 built with MSVC10.
It is likely a problem with a module not being thread-safe. See this PerlMonks discussion, also about LWP and https.
The thread (er...discussion) also offers some potential solutions.
The solution I use is to clone the Mechanize object at the start of each thread and work with the cloned copy. But as I said, I'm using WWW::Mechanize, not plain LWP.
$mech = $mech->clone();  # each thread works with its own copy
As I am new to web2py, I wonder what ways are available for debugging a web2py application. So far, I've come across the following scenarios:
When a runtime error occurs in a web2py app, an error ticket is generated, and the ticket normally contains useful information.
However, sometimes only a plain error message is shown on the page, for example 'bad request', and that's it. What would be the best way in this case to track down what went wrong? Logging? If so, how do we do it properly?
If no obvious error message is shown but the app doesn't perform as expected, I usually use a debugger with breakpoints to check it out. Any other suggestions?
Any experience/insight is extremely welcome.
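To make the logging question concrete, this is the sort of minimal setup I have in mind, using only the Python standard library (the logger name, format, and placement in a model file are just placeholders, not web2py conventions):

```python
import logging

# Illustrative setup; in web2py this could live in a model file so it
# runs on every request. The name "myapp" is an arbitrary choice.
logger = logging.getLogger("myapp")
if not logger.handlers:  # avoid stacking duplicate handlers on reloads
    handler = logging.StreamHandler()
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)

# Then, around the code that produces the bare 'bad request' page:
logger.debug("entering controller, request vars=%r", {"q": "test"})
```

Is something like this the right approach, or does web2py provide its own preferred logging hook?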
You can detect errors at your model or controller layer by adding unit tests. That will help narrow your debugging efforts, especially when the error ticket system breaks down. Unfortunately the web2py documentation doesn't stress the importance of unit tests enough. You can run doctests on your controllers with
python web2py.py -T <application_name>
Since the model files run for each controller request, you will at least catch syntax errors at the model layer.
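For instance, a controller action with an embedded doctest might look like this (a minimal sketch; the action name and return value are made up, and `python web2py.py -T <application_name>` would pick the doctest up):

```python
def index():
    """Return the context dict for the index view.

    >>> result = index()
    >>> isinstance(result, dict)
    True
    >>> result["message"]
    'hello'
    """
    # web2py controller actions conventionally return a dict that is
    # passed to the view for rendering
    return dict(message="hello")
```

Even a trivial assertion like this fails loudly when the action raises, which is often more informative than a bare 'bad request' page.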
The latest version has an integrated debugger. You can set breakpoints on your code and step through it.
The other suggestions are good. I would also suggest the Wing IDE debugger. It isn't very expensive, and works well with Python generally and web2py specifically.
Wing has a capability to do remote debugging -- very useful when you're working through production-style deployment with remote app servers. That capability saved my bacon any number of times.
As @Derek pointed out, there is an integrated debugger for web2py.
You can set a breakpoint from the integrated web2py editor (by clicking 'toggle breakpoint') or set it manually as described at the link above.
Once you hit the breakpoint, you can open http://localhost:8000/admin/debug/interact (if running locally) to evaluate any expression at that point.
I am having to use a third-party ActiveX DLL in my VB6 application. However, now that I've included the DLL in the references and used it in code, every time I quit my app, it also quits VB6.
I don't see anything in the logs or event viewer that would suggest why this is happening.
Is there any way to prevent this?
Btw, I have contacted the vendor, but they are focused on their .NET products, it seems.
You may not be using the component correctly, perhaps missing specific initialization or termination calls, which has the effect of bringing down the VB IDE. This usually happens when the third-party component or your application makes Win32 calls.
I have had a few applications that always terminated the VB IDE when I ran them through the debugger, yet running the Release or Debug builds normally, the applications worked just fine.
Try switching DEP off for VB6.exe only, or altogether.
Also, this might be a license-checking issue (i.e. registry permissions): try running the VB6 IDE as Administrator (right-click, Run as Administrator).
I'm not a VB6 programmer by trade; I just mess around with the stuff. I have heard this scenario referred to as subclassing. Run a search on pscode.com; they have code and tutorial examples about how to prevent it. Good luck.
Ouch. I feel your pain.
Can you switch to a .NET component and use it from VB6 via interop? I.e., write a COM-visible wrapper in VB.NET?
Just to close this question out... after spending significant time trying various things, I ended up writing code that unloaded the control, paused for 5 seconds, and then quit the app. That seemed to do the trick.
I will apologize in advance as this post is born out of severe frustration.
I have a classic ASP website that has been running on Windows 2000/IIS5 for years, and another ASP.NET 2.0 site that we've recently started running on the same servers. So far, everything is running well.
Last year, I tried upgrading (fresh install) to Windows 2003/IIS6. The classic ASP site was much slower, about 50% slower based on logs/stats averaged over weeks of use. I tried everything to find out what was slow. Network tweaks. Integrated mode. Classic IIS5 mode. In process. Out of process. Nothing ever made things better, and I soon rolled back to IIS5/2000. The very day I rolled back, performance went right back to where it was. This happened on more than one server. Eventually, I gave up and chalked it up to 2003 TCP issues of some sort.
I recently installed a Windows 2008/IIS server on a similar but more powerful machine, in hopes that things would be better. Much to my happiness, my classic ASP app is faster under Windows 2008. Unfortunately, my ASP.NET app is 50-75% slower for no apparent reason. All of its content loads. It's on the same network as the 2000 machine. The site was copied directly from the other machine, and it's a precompiled web app built with Visual Studio 2005.
While the page does hit the database and another server for its initial data, it caches the results for quite a while. It also uses the same DB servers as the classic site, which is fast, so I know it's not a connection issue.
I've tried the default app pool and the classic .NET pool; it made no difference. I've upped/checked the max threads and max-per-CPU settings in all the usual locations; web garden or not, nothing seems to matter. I've double-checked that compilation debug="false" is still set in web.config.
For a quick benchmark, I used ab.exe (Apache Bench) to send 10 requests, 1 at a time. Even if I use IE or Firefox to hit the site, it's clearly slower than under 2000, according to Firebug as well.
At this point, I'm frustrated and at a complete loss as to where to start. Has anyone been through this sort of mess before?
Speed depends on many factors, so you need to measure performance on the server itself to understand whether this is a server issue. Enable tracing for your website in web.config and see which part/function is slowing it down. You can add your own tracing after each operation to see which block of code is slowest. I'm sure you will find things you can improve/optimize once you know which part of the page is the slowest.
In my case, the answer turned out to be a simple one once I fired up Wireshark: there was one external resource request that could not be resolved, since the test machine had no direct access to the internet like the live machine did.
It's always the little things.