CodeCombat not working locally - macOS

I wanted to install CodeCombat locally to be able to understand it better. I followed the steps for Mac OS X (I have Yosemite) described at: https://github.com/codecombat/codecombat/wiki/Developer-environment
Everything worked: all the scripts are running without problems, mongo is up and running, and the game starts, but then the game itself won't proceed.
I haven't restored the mongo dump, which is a hefty 2 GB and which I can't download easily on my current internet connection, but it seems to be optional.
Looking at the console, I see a couple of 404s that I can't explain (see below). If somebody could help me get the game running locally, I would be very grateful.
GET /db/thang.type/529ffbf1cf1818f2be000001/version 404
GET /db/level/dungeons-of-kithgard/session 404
There are also 404s for the mp3 files, which I am fine to live without.
Thanks in advance,
Matthieu
PS: I would have liked to add more tags, but as this concerns many languages and doesn't have a specific tag of its own, I didn't know which ones to add.

OK, it turns out the mongo dump is mandatory: it contains the base elements for the levels. But 2 GB is too much, and a way will be sought to export only the required data; 200 MB should be enough.
UPDATE: With the following ticket implemented, the size is reduced to less than 100 MB: https://github.com/codecombat/codecombat/issues/1988

Related

Debug CreateML Documentation

I've trained a couple of action classifier models and a few object detection models with no issues. Recently, though, everything started crashing, and I'm not sure why; I have not updated anything on my computer. Does anybody know how to debug the application? I've been looking for documentation, but I have not been able to find any info that would help me debug common errors. More info below on the issues I've had.
I am having multiple issues depending on what I am trying to train. I have another question open for an ioaf code error. While waiting on a response, I started working on another model, on a different laptop. This model is meant to recognize a user's action. However, it is now failing with "asset contains no video tracks", which makes no sense to me. I am unable to find any documentation for debugging or reading error logs from CreateML, and their technical support is no support.
Has anybody run into a similar issue, or does anyone know how to debug the application? I'm trying to figure out where it's failing; I've opened several of the video files to see if they are corrupted, and so far none have been. This is not a good debugging method when you're a small team dealing with hours of video clips or thousands of images.
Sometimes I have this issue when some Python libraries get updated and change important things, like the input structure, so check that. I recommend you post your errors so that others can help you. You could also try working in another editor like Google Colab, but note that there every variable is global.
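One way to batch-check the clips instead of opening them one by one: a small script that flags any file without a video stream, which is what "asset contains no video tracks" suggests. This is just a sketch; it assumes ffmpeg/ffprobe is installed, and the folder name is made up:

    import subprocess
    from pathlib import Path

    def has_video_track(path: Path) -> bool:
        # Ask ffprobe for the codec type of each video stream;
        # empty output means the container has no video track at all.
        result = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v",
             "-show_entries", "stream=codec_type", "-of", "csv=p=0", str(path)],
            capture_output=True, text=True)
        return "video" in result.stdout

    # Flag every clip in a (hypothetical) training folder before handing it to CreateML.
    for clip in sorted(Path("training_clips").glob("*.mp4")):
        if not has_video_track(clip):
            print(f"no video track: {clip}")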

How to start interacting with the ACR122U-A9 NFC reader?

I'm a junior PHP/JavaScript/HTML developer, recently hired by a company that makes photobooths. I had never worked on an Ubuntu system before this. I find this relevant because I think that, for this reason, I might be skipping an obvious step or something like that.
One of the projects I have to work on is adding an NFC device to the photobooths, so the user can just tap the area with their phone and get the pictures they just took. Sounds easy.
A previous employee bought an ACR122U-A9 device that connects via USB, but they weren't able to make it work. I took the device and followed every single tutorial I could find, and I had no luck either.
What I have achieved after installing a great deal of things and blindly following tutorials is just this:
If I open a terminal and type "pcsc_scan", it detects the device and it kind of "works", reading the cards when I tap them. I get some hexadecimal codes and some blue text that means nothing to me. And while I do this I can't even type in the terminal, so I cannot do anything at all with it.
What I actually want is to know how to make the computer speak to the NFC device, not just listen to it. Well, I guess it has to listen to know when to send info.
I think I'm missing something very obvious, because every tutorial I find just explains what kind of code you need to write to do X, or how to make the device emulate a card, and things like that... But I think I need something WAY more basic:
How do I even start working with and talking to it?
Info that might be relevant:
I didn't specify how I got to the point where typing "pcsc_scan" does something, because A) I've done so many tutorials and different things that I don't remember which part of what I did accomplished it, and B) I'd like to start from scratch in order to understand what I am doing.
I'm working on an Ubuntu 17.10 machine, but the final product will run under Windows (different versions of it, depending on the photobooth).
Our photobooths work with a web API on localhost. Everything is either PHP, JavaScript, CSS, or HTML. In the end I will need a way for the device to get the info it needs from one of these languages (if possible).
I'm still struggling with Ubuntu. Everything you try to install or interact with in this OS is done via commands that I don't completely understand and that I repeat from random internet tutorials or forums like a parrot. Fixing this is not part of the question (I'll eventually learn it), but I think it is useful to know that I might not know some things that should be obvious or basic about it.
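For what it's worth, here is a minimal sketch of talking to the reader programmatically, using Python's pyscard library on top of the same PC/SC layer that pcsc_scan uses (so the fact that pcsc_scan works suggests this should too). Since the photobooths already run a local web API, a small helper like this could sit behind one of its endpoints; the 30-second timeout is arbitrary:

    # pip install pyscard
    from smartcard.CardRequest import CardRequest
    from smartcard.util import toHexString

    # Block for up to 30 seconds until a card (or NFC phone) lands on the reader.
    cardservice = CardRequest(timeout=30).waitforcard()
    cardservice.connection.connect()

    # FF CA 00 00 00 is the standard PC/SC "get UID" pseudo-APDU,
    # which the ACR122U understands.
    GET_UID = [0xFF, 0xCA, 0x00, 0x00, 0x00]
    data, sw1, sw2 = cardservice.connection.transmit(GET_UID)
    print("UID:", toHexString(data), "status: %02X %02X" % (sw1, sw2))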

Website running slow in IE with no specific reason why

We have a website that a few people are complaining is running extremely slow. We're struggling to figure out why, or even to recreate the problem. Most mention that it runs slow in IE.
It's not limited to any specific section of the site; it's the whole thing in general.
Several developers have been creating and adjusting the code, so it's rather bloated, but we can't see any specific reason why this should happen.
Can anyone see why?
We've also run a speed test:
I was running a profiling test with IE on your website, and there is a call to:
http://www.playforce.co.uk/-ms-transform.htc
which is returning a 404 Not Found and taking about a second to complete (0.91 s).
It is referenced all over your CSS in this line:
behavior:url(-ms-transform.htc);
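If you want to confirm that cost yourself, a quick sketch using Python's standard library times the failing request:

    import time, urllib.error, urllib.request

    url = "http://www.playforce.co.uk/-ms-transform.htc"
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=10)
    except urllib.error.HTTPError as err:
        # A 404 raises HTTPError; print the status and how long it took.
        print("HTTP %d after %.2f s" % (err.code, time.monotonic() - start))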
I'm no expert! I welcome better suggestions and corrections to what I am about to say.
You can try using a trial version of Borland Silk Meter.
It measures the speed at which each element loads, using various browsers and various geographical locations, all configurable by you.
Also, since only some of the users are complaining about the speed, you should check those users' internet speed, browser version, add-ons, and so on, because sometimes the problem is not only with the server.
Try the above tool to confirm nothing is wrong with your server, and then proceed to checking the clients' browsers and networks.

RailsApp to (kind of) Standalone?

I made my first small Rails app. I'm thinking about running it purely locally on a friend's computer (it is made to track his students' payments for different schools).
Is there a way to make a small Ruby program that:
Starts the internal Rails server
(optional) Opens the standard browser
Saves a copy of the database on shutdown (in case of a crash)
General question: is this even possible (without learning C)?
I downloaded "Shoes", which was the first tool that seemed suitable for my task. Any link, clue, or tutorial would be appreciated; thanks in advance.
Update:
I used WEBrick while developing, and I would also use it in "deployment". The problem behind my question: I want to run the app on a friend's computer, and he is not that into computers, so using the console would not be the first choice.
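For what it's worth, a rough sketch of such a launcher, written in Python here (the same three steps translate directly to a plain Ruby script). It assumes WEBrick answers on port 3000 and the app uses the default SQLite database; the paths are placeholders:

    import shutil, subprocess, time, webbrowser
    from datetime import datetime

    APP_DIR = "/path/to/app"  # placeholder: the Rails app directory

    # 1. Start the internal Rails server (WEBrick).
    server = subprocess.Popen(["rails", "server"], cwd=APP_DIR)
    time.sleep(5)  # crude, but gives WEBrick time to boot

    # 2. Open the standard browser on the app.
    webbrowser.open("http://localhost:3000")

    try:
        server.wait()  # runs until the server process is stopped
    finally:
        # 3. Save a timestamped copy of the database on shutdown.
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        shutil.copy(APP_DIR + "/db/development.sqlite3",
                    APP_DIR + "/db/backup-" + stamp + ".sqlite3")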

"Works on my machine" - How to fix non-reproducible bugs?

Very occasionally, despite all testing efforts, I get hit with a bug report from a customer that I simply can't reproduce in the office.
(Apologies to Jeff for the 'borrowing' of the badge)
I have a few "tools" I can use to try to locate and fix these, but it always feels a bit like I'm knife-and-forking it:
Asking for more and more context from the customer (systeminfo)
Log files from our application
Ad-hoc tests with the customer to attempt to change the behaviour
Providing customer with a new build with additional diagnostics
Thinking about the problem in the bath...
Site visit (assuming customer is somewhere warm and sunny)
Are there set procedures, or other techniques, that anyone uses to resolve problems like this?
One of the attributes of good debuggers, I think, is that they always have a lot of weapons in their toolkit. They never seem to get "stuck" for too long, and there is always something else for them to try. Some of the things I've been known to do:
ask for memory dumps
install a remote debugger on a client machine
add tracing code to builds
add logging code for debugging purposes
add performance counters
add configuration parameters to various bits of suspicious code so I can turn on and off features
rewrite and refactor suspicious code
try to replicate the issue locally on a different OS or machine
use debugging tools such as application verifier
use 3rd party load generation tools
write simulation tools in-house for load generation when the above failed
use tools like GlowCode to analyse memory leaks and performance issues
reinstall the client machine from scratch
get registry dumps and apply them locally
use registry and file watcher tools
Eventually, I find, the bug just gives up out of some kind of awe at my persistence. Or the client realises that it's probably a machine-specific or client-side install or configuration issue.
Extensive logging usually helps.
The easiest way is always to see the customer in action (assuming the problem is readily reproducible by the customer). Oftentimes, problems arise from issues with the customer's computer environment, conflicts with other programs, and so on; these are details you will not be able to catch on your dev rig. So a site visit might be useful, but if that's not convenient, tools like RealVNC can help by letting you see the customer 'do their thing'.
(Watching the customer in action also allows you to catch them out in any WTF moments they might have.)
Now, if the problem is intermittent, things get somewhat more complicated. The best way around this is to log useful information in the places where you guess problems could occur, and perhaps to use a tool like Splunk to index the log files during analysis. A diagnostic build (i.e., one with extra logging) might be useful in this case.
I'm just in the middle of implementing an automated error reporting system that sends information back to me (currently via email, although you could use a web service) from any exception encountered by the app.
That way I get (nearly) all the information I would have if I were sitting in front of VS2008, and it really helps me work out what the problem is.
The customers are also usually (sorta) impressed that I know about their problem as soon as they encounter it!
Also, if you hook the Application.ThreadException error handler, you can send back info on unexpected exceptions too!
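In a Python app, the same idea might look like this sketch, hooking sys.excepthook instead of .NET's Application.ThreadException (the mail host and addresses are placeholders):

    import smtplib, sys, traceback
    from email.message import EmailMessage

    def report_exception(exc_type, exc_value, exc_tb):
        # Email the full traceback, then fall through to the default handler.
        msg = EmailMessage()
        msg["Subject"] = "Unhandled exception: %s" % exc_type.__name__
        msg["From"] = "app@example.com"   # placeholder
        msg["To"] = "dev@example.com"     # placeholder
        msg.set_content("".join(traceback.format_exception(exc_type, exc_value, exc_tb)))
        try:
            with smtplib.SMTP("mail.example.com") as smtp:  # placeholder host
                smtp.send_message(msg)
        except OSError:
            pass  # never let the reporter mask the original crash
        sys.__excepthook__(exc_type, exc_value, exc_tb)

    sys.excepthook = report_exception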
We use all the methods you mention, progressively, starting with the easiest and proceeding to the harder ones.
However, you forget that sometimes the hardware is at fault. For example, memory could be malfunctioning, so some computation-intensive code behaves strangely and throws exceptions with weird diagnostics. Of course it works on your machine, since you don't have faulty hardware.
Experience is needed to identify such errors and to insist that the customer try installing the program on another machine or run a hardware check. One thing that helps greatly is good error handling: when your code throws an exception, it should provide details, not just indicate that something is bad. With good error reporting it's easier to identify suspicious issues related to faulty hardware.
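A tiny illustration of that last point (a sketch; the names and values are made up):

    def verify_block(index: int, expected: int, actual: int) -> None:
        # Unhelpful: raise ValueError("bad data")
        # Helpful: say what failed, where, and with which values, so a
        # one-bit memory flip shows up as an obviously absurd number.
        if expected != actual:
            raise ValueError(
                "checksum mismatch in block %d: expected %#010x, got %#010x"
                % (index, expected, actual))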
I think one of the most important things is the ability to ask sensible questions about what the customer has reported... More often than not, they are not mentioning something because they don't see it as relevant, when it is actually key.
Telepathy would also be useful...
We've had good success using EurekaLog, with it posting directly to FogBugz. This gets us a bug report containing a call stack, along with related system info (other running processes, memory, network details, etc.) and a screenshot. Occasionally customers enter further info too, which is helpful. In most cases it has certainly made fixing bugs much easier and quicker.
One technique I've found useful is building the application with an integrated "diagnostic" mode (enabled by a command-line switch when you launch the app). That avoids the need to create custom builds with additional logging.
Otherwise, it sounds like what you're doing is as good an approach as any.
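A minimal sketch of such a switch in Python (the flag name is just an example):

    import argparse, logging

    parser = argparse.ArgumentParser()
    parser.add_argument("--diagnostic", action="store_true",
                        help="enable verbose diagnostic logging")
    args = parser.parse_args()

    # Same build, two verbosity levels: no custom builds needed.
    logging.basicConfig(
        filename="app.log",
        level=logging.DEBUG if args.diagnostic else logging.WARNING,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s")

    logging.debug("only written in diagnostic mode")
    logging.warning("always written")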
Copilot (assuming customer is somewhere cold and rainy :)
The usual procedure for this is to expect that something like this will happen and to add a ton of logging. Of course, you don't enable it from the beginning, but only when the problem appears.
Usually customers don't like having to install a new version or diagnostic tools. It is not their job to do your debugging, and visiting a client for cases like these is rarely an option. You must involve the client as little as possible: flipping a switch and sending you a log file is OK; anything more than that is too much.
I like the alternative of thinking about the problem in the bath. I would start by trying to find the differences between my machine and the client's configuration.
As a software engineer doing web stuff (booking/shop/member systems, etc.), the most important thing for us is to get as much information from the customer as possible.
Going from:
it's broke!
to:
it's broke! & here are screenshots of every option I picked whilst generating this particular report
reduces the amount of time it takes us to reproduce and fix an issue no end.
It may be obvious, but it sometimes takes a fair amount of chasing to get this kind of information from our customers! It's worth it, though, just for those moments when you find they're not actually doing what they say they are.
I had these problems too. My solution was to add lots of logging and give the customer a debug build with all the possible debug information. Then I made sure Dr. Watson (this was on Windows NT) created a memory dump with enough information.
After loading the memory dump in the debugger, I could find out where and why it crashed.
EDIT: Oh, this obviously only works if the application terminates violently...
I think following the trail of actions the user took can lead us to the reason for a failure, or for selective failures. But most of the time users are at a loss to precisely describe their interactions with the application, so automatic screenshot capture helps (if it is a desktop app; for a .NET app you can check Jeff's UnhandledExceptionHandler). Logging all the important actions that change the state of objects can also help us understand what happened.
I don't have this problem very often, but if I did, I would use screen sharing or a session-recording application to watch the user in action without having to go there (unless, as you said, it's warm and sunny and the company pays for the trip).
I have recently been investigating such an issue myself. Over the course of my career I have learnt that, while computer systems may be complex, they are predictable, so have faith that you can find the problem. My approach to these kinds of issues is twofold:
1) Gather as much detailed information as possible from the customer about the failure, and analyse it meticulously for patterns. Gather multiple sets of data for multiple failure occurrences to build up a clearer picture.
2) Try to reproduce the failure in-house. Keep making your system more and more similar to the customer's until you can reproduce the failure, the systems are identical, or it becomes impractical to make them more similar.
While doing this, consider:
1) What differences exist between this system and other, working systems.
2) What has recently changed in your product or in the customer's configuration that could have caused the problem to start occurring.
Regards
Depending on the issue, you could get WinDbg dumps; they normally give a pretty good idea of what is going on. We have diagnosed quite a few problems that weren't crashes from minidumps.
For .NET apps we also use Trace.WriteLine; then we can get the user to fire up DbgView and send us the output.
It's a very complicated issue. I was thinking of writing a procedure for this, and I have just made one for non-reproducible bugs; it might be helpful.
When a bug occurs, there are several factors that might cause it. I am sure all bugs are reproducible; I always keep an eye out for these kinds of issues.
Get the system information:
What other actions the customer performed before the bug occurred.
The time period in which it occurs, and whether it is rare or frequent.
The next action after the issue (is it always the same, or does it differ?).
Find the factors behind this bug (as a developer):
Find the exact position where the issue happened.
Find all the system factors at that time.
Check for memory leaks, user errors, or wrong conditions in the code.
List all the factors that may cause this issue, how each factor affects it, and what data those factors hold.
Check whether memory issues happened.
Check that the customer is running the same, current code as yours.
Check all logs from at least the last month and note any abnormal operations.
Just a short anecdote (hence 'community wiki'): last week, in a Django app, I thought it was a clever idea to import the pprint module for pretty-printing Python data only if DEBUG was True:

    if settings.DEBUG:
        from pprint import pprint

Then, here and there, I used the pprint command as a debugging statement:

    pprint(somevar) # show somevar on the console

After finishing the work, I tested the app with DEBUG=False. You can guess what happened: the site broke with HTTP 500 errors all over the place, and I did not know why, because there is no traceback when DEBUG is False. I was puzzled that the errors magically disappeared when I switched back to debug mode.
It took me 1-2 hours of putting print statements all over the code to find that it crashed at exactly the pprint() line above. Then it took me another half an hour to convince myself to stop banging my head on the table.
Now comes the moral of the story:
Not everything that looks like a clever idea at first sight turns out to be so savvy in the end.
An important point to look at when debugging such errors is all the configuration options and platform switches your code makes by itself. These can be quite a lot more than just some user preferences. Document it well if you make an assumption about the user's platform (e.g., if you test for Win/Mac/Linux only, will your code crash on BSD or Solaris?).
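In hindsight, a no-op fallback would have kept the non-debug runs from blowing up; a sketch, in the same Django module as above:

    if settings.DEBUG:
        from pprint import pprint
    else:
        def pprint(*args, **kwargs):
            # Stand-in so stray pprint() calls are harmless when DEBUG is False.
            pass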
Cheers,
However tough a non-reproducible problem is, we can still take a structured and strategic approach to solving it, and I can say from experience that it requires out-of-the-box thinking in 50% of cases. Generally speaking, you can categorize the problems into different types, which helps to identify which tool to use. For example, for a non-reproducible application crash or a memory issue, you can use profilers and nail the issue down to the particular functionality.
Also, one of the most important approaches is information-rich logging. I use a lot of enums to describe the state of the process, depending on the scenario in question. For example, I used states like Initiated, Triggered, Running, Waiting, and Repaired to describe schedule states, and saved them to the DB at different stages.
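In Python, that might look like this sketch (the state names are the ones above; the persistence step is only hinted at in a comment):

    import logging
    from enum import Enum

    class ScheduleState(Enum):
        INITIATED = "Initiated"
        TRIGGERED = "Triggered"
        RUNNING = "Running"
        WAITING = "Waiting"
        REPAIRED = "Repaired"

    def set_state(schedule_id: int, state: ScheduleState) -> None:
        # Log the transition; the real app would also save this row to the DB.
        logging.info("schedule %d -> %s", schedule_id, state.value)

    logging.basicConfig(level=logging.INFO)
    set_state(42, ScheduleState.RUNNING)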
Not mentioned yet, but a "directed code review" is one good solution, especially if you didn't do a proper review (at least one hour per 100 lines of code) before release.
I have also seen impressive demos of the AppSight Suite, which is basically an advanced environment monitoring and logging tool. It allows the customer to record what happens on their machine in an extensive but fairly compact log file, which you can then replay.
As many have mentioned: extensive logging, and asking the client for the log files when something goes wrong. In addition, as I have worked more with web apps, I also provide detailed but succinct deployment documentation (e.g., deployment steps, environmental resources that need to be set up, etc.).
Here are common problems I've seen that lead to the types of problem you are describing:
Environment not set up properly (e.g., missing environment variables, data sources, etc.).
Application not fully deployed (e.g., database schema not deployed).
Difference in operating system configuration (the default character encoding being the most common culprit for me).
Most of the time, these issues can be identified through the log content.
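A startup sanity check that logs everything missing at once makes these much faster to spot. A sketch (the variable names are hypothetical):

    import logging, os

    REQUIRED_ENV = ["DATABASE_URL", "APP_ENCODING"]  # hypothetical names

    def check_environment() -> bool:
        # Report every missing variable instead of failing on the first one.
        missing = [name for name in REQUIRED_ENV if name not in os.environ]
        for name in missing:
            logging.error("missing environment variable: %s", name)
        logging.info("locale/encoding: %s", os.environ.get("LANG", "<unset>"))
        return not missing

    if __name__ == "__main__":
        logging.basicConfig(level=logging.INFO)
        check_environment()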
You can use tools like Microsoft SharedView or TeamViewer to connect to the remote PC and inspect the problem directly on site. Of course, you'll need the customer's cooperation.
