I'm designing a game, but this question is applicable to any situation that requires bidirectional communication between nodes in a cluster and a main server. I am pretty new to clusters, but I actively program in Go and occasionally in D.
I really want to use a modern language (not C/C++), so I've chosen these two languages because:
Array slices
Good concurrency support
Cross platform & compiles natively (with multiple compiler implementations)
GC (both working on a precise GC)
I've read https://stackoverflow.com/questions/3554956/d-versus-go-comparison and The D Programming Language for Game Development.
At a high level, my game will do most of the processing server side, with the client just rendering the game state from its perspective. The game is designed to scale, so it will need to run as a cluster. The components are mostly CPU bound and report updates asynchronously to a main server, which shares game state with the clients. Most computation depends on user input, so those events need to be sent down to the individual components (hence bi-directional RPC).
Reasons I like D:
Manual memory management
Templates/CTFE
Code safety (@safe, contracts, in/out)
Reasons I like Go:
Standard library (pprof, RPC)
Go routines
go tool (esp. go get -u to install/update remote dependencies)
The client will likely be written in D, but that shouldn't have an impact on the server.
I am leaning towards D because manual memory management is baked into the language. While it doesn't have nice libraries for RPC, I could in theory implement those myself, whereas I cannot as elegantly implement manual memory management in Go.
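For what it's worth, this is roughly what the server side of an RPC endpoint looks like with Go's standard net/rpc package. It is only a minimal sketch; the GameState service, its method, and the port are made-up names for illustration, not anything from the question:

    package main

    import (
        "log"
        "net"
        "net/rpc"
    )

    // GameState is a hypothetical service holding the authoritative state.
    type GameState struct{}

    type MoveArgs struct {
        PlayerID string
        X, Y     float64
    }

    type MoveReply struct {
        Accepted bool
    }

    // ApplyMove follows net/rpc's required shape: an exported method with
    // two exported argument types, the second a pointer, returning error.
    func (g *GameState) ApplyMove(args *MoveArgs, reply *MoveReply) error {
        // ... validate the move and apply it to the game state here ...
        reply.Accepted = true
        return nil
    }

    func main() {
        if err := rpc.Register(new(GameState)); err != nil {
            log.Fatal(err)
        }
        ln, err := net.Listen("tcp", ":7000")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            go rpc.ServeConn(conn) // one goroutine per connected cluster node
        }
    }

A cluster node would connect with rpc.Dial("tcp", addr) and call "GameState.ApplyMove"; for the reverse (server-to-component) direction, each component would expose its own endpoint in the same way.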
Which would you use for this problem given the choice between the two languages?
I expect that either will work and that a lot of it depends on which you prefer, though if you're doing the client in D, I'd advise doing the server in D simply because then there are fewer languages involved. If you use two languages, then anyone working on your project generally has to know them both, and both Go and D are small enough in terms of their user base at this point that few people will know both - though if it's just you working on it, you obviously know both of them already.
However, I would point out that if the problem with using D is the lack of an RPC library, then that isn't a problem, because D is supported by Apache Thrift. So, D does have a solid RPC library, even if it's not in its standard library (in fact, it was one of the fruits of D's first round of participation in Google's Summer of Code).
I don't know anything about your game, but if good concurrency on the server is important, then I vote for Go.
I developed a communication server in Go that implements push-style communication, and Go is great for such tasks: compact, clean code that is easy to understand (a minimal sketch of such a push hub follows at the end of this answer).
Automatic memory management is important in concurrent apps.
Client apps are not as concurrent as server apps, but they do need to maintain a consistently high frame rate.
So manual memory management, without global GC pauses, is a better fit for client apps.
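To make the push idea concrete, here is a minimal sketch of the kind of hub such a Go server might use. The Event and Hub types, the channel buffer sizes, and the "drop events for slow clients" policy are all illustrative assumptions, not the actual server described above:

    package main

    // Event is whatever the server pushes to clients.
    type Event struct {
        Kind string
        Data []byte
    }

    // Hub fans events out to registered client channels.
    type Hub struct {
        register   chan chan Event
        unregister chan chan Event
        broadcast  chan Event
    }

    func NewHub() *Hub {
        return &Hub{
            register:   make(chan chan Event),
            unregister: make(chan chan Event),
            broadcast:  make(chan Event, 64),
        }
    }

    // Run owns the client set; all mutation happens on this one goroutine,
    // so no locks are needed.
    func (h *Hub) Run() {
        clients := make(map[chan Event]struct{})
        for {
            select {
            case c := <-h.register:
                clients[c] = struct{}{}
            case c := <-h.unregister:
                if _, ok := clients[c]; ok {
                    delete(clients, c)
                    close(c)
                }
            case ev := <-h.broadcast:
                for c := range clients {
                    select {
                    case c <- ev: // push to this client
                    default: // drop the event if the client can't keep up
                    }
                }
            }
        }
    }

    func main() {
        hub := NewHub()
        go hub.Run()

        // A client subscribes by registering a channel of its own...
        me := make(chan Event, 8)
        hub.register <- me

        // ...and the server pushes to everyone through broadcast.
        hub.broadcast <- Event{Kind: "tick", Data: []byte("hello")}
        ev := <-me
        println(ev.Kind) // "tick"
    }

The design point is that only the Run goroutine touches the client set, so no locks are needed; clients just register a channel and receive pushed events from it.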
What language, between Go and Rust, would you use to create a library for games (no bindings)?
Go is a simpler language that leans more heavily on garbage collection. Rust is a more complex language that can be safely used without the GC at all which is perfect for low-level systems programming.
I'm biased since I spent two summers working on Rust, but if you're willing to invest the necessary time to keep up with a rapidly changing language, Rust would be really good for games. It has a really nice set of built in concurrency primitives, so it would be easy to separate the different components such as the rendering engine, the AI, etc. and take advantage of multicore computers. It's also possible to avoid the need for garbage collection, so you don't have to worry about unpredictable GC pauses. It's designed to integrate nicely with existing C code, and many of the data types map directly onto C types. Rust's approach to polymorphism leads to some really nice assembly once LLVM is done with it.
Many games nowadays are running in the web browser, which suggests that web browsers and games have similar requirements. Mozilla is designing Rust alongside its new parallel browser engine, which means the language will continue to evolve in ways that would work well for game programming too.
Rust's own documentation says: "This is alpha-level software with many known bugs, incomplete features and planned future changes. Use at your own risk; expect some instability, disruption and source-level incompatibility for a while yet."
That's no good for a commercial game.
You can't make a game library with Go at all: there is no support for creating a shared library in Go. With Go you can only create a module (library) that you will use from other Go code.
You can use a C or C++ library from Go, but you can't use a Go library from C++.
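To illustrate the C-from-Go direction, here is a minimal cgo sketch. The c_distance function is defined inline purely for illustration; a real project would link against an existing C library instead:

    package main

    /*
    #cgo LDFLAGS: -lm
    #include <math.h>

    // Stand-in for a function provided by an existing C library.
    double c_distance(double x, double y) {
        return sqrt(x*x + y*y);
    }
    */
    import "C"

    import "fmt"

    func main() {
        // Call the C function through the generated "C" pseudo-package.
        d := C.c_distance(C.double(3), C.double(4))
        fmt.Println(float64(d)) // 5
    }

Going the other way (building a Go library that a C++ program links against) was not supported when this was written, which is what the answer above is pointing out.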
You may ask which language is better for games, Rust or Go.
Update (2015):
Go 1.4 has official (beta) support for Android, and Go 1.5 (summer 2015) will have iOS support.
Right now it is tricky to build for Android; you have to install a Docker image.
I'm looking for a tool that will allow me to test a Rails-based JSON web service. LoadRunner would fit my needs, but I need a free solution.
JMeter is free and scriptable; you should have a look.
What is your virtual user requirement? Some of the commercial tools offer no-cost versions at a limited load level, so before addressing your need I'd like more specifics on how many virtual users you require.
For clarification, are you looking for a tool which can produce an ACM/IEEE-definition stress test from a scheduler perspective? This would be a test which increases the load by a defined increment every n seconds/minutes/hours until the system collapses or a particular metric is reached, such as response time exceeding the SLA value by 250% for five minutes, or CPU above 90% for 45 seconds, etc. Schedulers are all over the map in the tools space; some are better than others when it comes to stress, but most work equally well for a defined load level.
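Independent of any particular tool, a stepped stress run of the kind described above boils down to a small loop. Here is a hedged sketch in Go, just to make the ramp-up idea concrete; the endpoint URL, step size, interval, and error-rate threshold are all made-up values:

    package main

    import (
        "fmt"
        "net/http"
        "sync/atomic"
        "time"
    )

    // virtualUser hammers the endpoint until told to stop.
    func virtualUser(url string, errs, total *int64, stop <-chan struct{}) {
        for {
            select {
            case <-stop:
                return
            default:
                resp, err := http.Get(url)
                atomic.AddInt64(total, 1)
                if err != nil || resp.StatusCode >= 500 {
                    atomic.AddInt64(errs, 1)
                }
                if resp != nil {
                    resp.Body.Close()
                }
            }
        }
    }

    func main() {
        const (
            url        = "http://localhost:3000/api/items" // hypothetical endpoint
            step       = 10                                // users added per interval
            interval   = 30 * time.Second
            maxErrRate = 0.05 // stop once 5% of requests fail
        )
        var errs, total int64
        stop := make(chan struct{})
        users := 0
        for {
            for i := 0; i < step; i++ {
                go virtualUser(url, &errs, &total, stop)
            }
            users += step
            time.Sleep(interval)
            e, t := atomic.LoadInt64(&errs), atomic.LoadInt64(&total)
            rate := float64(e) / float64(t)
            fmt.Printf("users=%d requests=%d error-rate=%.1f%%\n", users, t, rate*100)
            if rate > maxErrRate {
                close(stop)
                break
            }
        }
    }

Real tools layer scheduling, reporting, and monitoring on top of exactly this kind of loop.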
How does monitoring fit into your tool model? Are there specific architectural components you would like to monitor that would drive the choice of tool? This will help you identify bottlenecks in the use of resources on architectural components.
What about your team's skills? You mention scripting, but how much are you expecting the tool to handle for you? Some of the open-source tools are great, but they demand a highly skilled developer to get the most out of them. The commercial side rounds some of the edges off, but in general you are still going to need to be proficient in the language of the tool. If you need Python, that takes you one path, Java another, VB a third, Pascal a fourth, C a fifth, etc. Sometimes it's easier to document what languages you know well and concentrate on tools that fit that model, as trying to learn a new tool and a new language at the same time rarely yields benefits.
Have a look at AgileLoad; it is free for small tests and provides both recording and advanced scripting features. It is compatible with JSON services. It is quite easy to use, and there are tutorials and videos on the website showing how to use the tool. Support is free, and the support team can help you with the scripting process.
I'd also take a look at The Grinder. It has a nice feature where you can create your load script by recording your browser activity.
There is a version of Load Tester that is free and has no limits on the number of virtual users you can run: Load Tester LITE.
I'm considering writing an app that has the following requirements. I'm proficient with Ruby, but I'm willing to learn a new language like Scala, Clojure or Python.
Concurrency / Best performance
This is my main goal. It needs to be amazingly fast and support concurrency in a decent way.
Use Redis as a back-end
This won't be a big problem, redis has a wide range of drivers available, but it may influence the final decision on a language/platform.
Websockets support
Good support for websockets is a must. Using an add-on library (like Cramp for Ruby::EM) is okay.
Options
I've gathered the following options:
Ruby EventMachine
Python Twisted
Node.js
Clojure
Scala
Java
Writing raw C or assembler is not a viable option at this time.
Concurrency
Ruby 1.9 still uses the GIL, whereas all JVM-based solutions can use native threads. I'm not sure about Node.js in this case.
How does the selected language affect performance?
The question
What do you recommend and why? Do you have hands-on experience? Please enlighten me (and the rest of StackOverflow).
Clojure is about twice as fast as Node.js, which is about three times faster than Python, which in turn is generally held to be faster than Ruby.
I'd vote for Clojure if high performance concurrency is your main criteria. Clojure was basically designed for concurrent development from the beginning, and there have been some impressive Clojure demos running on 800+ core Azul boxes.
It is very much worth looking at this video presentation to understand Clojure's approach to concurrency:
http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey
The main trick in Clojure's concurrency performance is a clever implementation of Software Transactional Memory (STM) that lets you conduct many concurrent transactions without complex and expensive locking schemes. It also uses persistent data structures to give immutability and efficient management of multiple versions of data. It's very cool.
As for general-purpose performance, Clojure is pretty fast already and getting even faster with the new 1.3 alpha branch. A stated aim of Rich Hickey (Clojure's creator) is to allow you to do anything in Clojure at the same speed that you can do it in pure Java.
Other things in Clojure that I really like but may or may not be relevant to you:
Hugely powerful LISP-style macro system - "code is data" and you can manipulate it as such
It's a fully fledged functional language
It's dynamically typed by default (for flexibility and quick prototyping), but you can add static type hints if you need to (for better performance)
Excellent JVM / Java integration, so you can make use of all the good Java libraries and tools out there (e.g. Netty for event-driven server communications)
With Clojure you could use Grizzly for async HTTP processing and Comet/WebSocket-based apps.
Redis is a great choice for caching and for building a powerful distributed session store, with a pub/sub protocol built in.
Another big option is to use RabbitMQ or ZeroMQ to build an agent-based distributed system and to provide group, data, or integration services, for example.
It's all relative... I like Clojure a lot and agree with you in part; Clojure is one of the fastest languages on the JVM.
But knowledge of the language is essential, and the benchmarks below can help confirm our impressions.
Some interesting links on benchmarks, performance, and comparisons:
http://bit.ly/dtqHAG
"Premature optimization is the root of all evil", by Donald Knuth
I'm writing this as DevConnections in Las Vegas is happening. Visual Studio 2010 has been released and I now have this 3GB beast installed to my machine. (I'll admit, it has some nice features.)
However, while the install was monopolizing my computer's resources I began to wish that my IDE worked more like Google Documents (instantly available, available anywhere, easy to share, easy to collaborate, naturally versioned).
A few Google (and StackOverflow) searches led me to:
Coderun
Bespin
I'm well aware that these IDE's are missing a lot of what exists in VS 2010. However, that isn't my question. Instead, I'm wondering what benefits a web-based IDE might have? Assuming a company invests the time to create the missing features, what is the downside?
Benefits:
Code available anywhere an internet connection is available
Simple sharing mechanisms
Simplified build mechanism
Many modern IDE features available (Autocomplete, syntax highlighting, etc...)
Requires a modern browser
Drawbacks:
Code is only available where an internet connection is available
Requires a modern browser (this might be an issue in some corporate settings)
Simplified build mechanism
At the mercy of the latency gods
No native debugger
No choice of revision-control
No clear backup solution
No clear way to fully remove source code from the provider's servers
No support available
No choice over maintenance schedule of servers
No control over IDE or environment features and tools
Must trust provider's security and privacy controls
As you can see, many of its benefits are also potential drawbacks. So I think the use of a browser-based IDE is very project dependent.
However, IMHO, I don't think browser-based IDEs have enough features or provide enough necessary services to replace desktop IDEs in most modern enterprises.
Just being devil's advocate here and listing the disadvantages:
Disconnection!
The fact that you don't really own the software - if you stop paying the monthly bills you can't access it any more, whereas you can keep using offline installed products after the initial payment.
Big / valuable projects may be uncomfortable not having their source code tucked away inside a network they control - one hacked account and their main IP is out on the net.
Limited extension ecosystem - with online services there is generally central control over extensions (like Facebook, for example), but nobody tells ReSharper what features it can include.
Forced upgrade - big corporations are still running .NET 2.0 (.NET 4 just came out). They can be slow to move, and being forced to use the latest and greatest version of the app could be too fast a pace for them.
Exposed to bugs - some people have weird personal rules, like they don't touch v1 software. If you always have the latest version you are exposing yourself to productivity-consuming errors (security updates are a different category from feature updates, but if you are running desktop software you can isolate your security exposure and decide your own reasons to upgrade).
Interoperability - perhaps your app works with another app - they might not be able to keep up with the release pace of the main app and the interoperability functionality might lag while the other developers play catch up.
Centralised point of failure - no control over backups, redundancy, etc.; it's in the hands of the developers of the service.
Personally, I find cloud-based services very convenient: now that I have a laptop, a desktop, and a work computer, and my friends have computers too, it becomes a chore to sync data between the lot. At this stage we are still dealing with toy apps on the web, but hopefully in a few years Silverlight will put a big dent in that.
The web is inherently less featureful than a native application. Also, how do you compile and test your code? No sane web host will let strangers compile, run, and test their code on its servers.
Besides "ubiquitous" availability (note the quotes), you get the "benefit" of editing code on the server. So, you get to skip many of the deployment steps that are necessary for many server side apps today. There's a simplicity of editing code like you'd edit a blog, but it can also be a curse as well. You still need a way to separate development from production.
But that said, if you use the Blog or many CMS applications, millions of folks use "Web based IDES" every day, so there's obviously applicability for specific application areas. I can tell you there are times I wish fixing a quick bug on a deployed app were as simply as clicking an "edit" button.
How has the current economic downturn affected the way you/your team works?
I am tending to do more enhancements, compared to brand new development a year or so ago.
This question came about during another pub conversation where we were discussing whether it's better to work on supporting existing applications or on new projects - which is more stable for the foreseeable future, with companies cost-cutting in all areas.
I mainly work on extending existing applications. I would say this is probably the safer of the two options as well. More than likely people are already using the existing applications, so you don't need to convince them it would be advantageous to start using them. From a business perspective, it is a lot easier to justify an expense that you already have than to try to add an additional one.
Number 3: rewriting existing apps (the guy who used to do my job suuuuccccckkkked).
Definitely seeing a downturn in large scale or new projects in general though, which is kind of the programming equivalent of saving not spending. Actually it's the literal equivalent of that, which is a problem for getting out of a recession.
Good question. I am at present working on a project that has good customers and decent revenue, so the economic downturn has not affected us much.
My suggestion: if there is a choice between enhancing existing projects and starting new ones, it's better to go for the revenue-generating existing projects. Investment in R&D projects may be reduced.
I believe "supporting" and bug-fixing on existing projects would not bring your much challenge and consequently experience. It can be a huge time waste for the career.
I am working on porting an existing business application to a new platform, which combines some of the aspects of work on an existing app, and some new stuff.
It's new because everything is going from Windows Forms to ASP.NET AJAX, and there are several changes involved in that process on the GUI and event-based side of things. But it's also partly work on existing stuff, because the business rules and the database are the same, although we have been gradually making improvements to those as needed.
On the other hand the company I work for supplies grocery stores which have been affected positively by more people eating at home, so despite being in Michigan, things are going well for the company, and we can afford to move this app onto the intranet.
The nice part about doing this is I get to learn all the new platform stuff, but we don't have to go out and get user input for some new set of use cases, plus we can work with the input we've received from the WinForms version.
I'm rewriting our existing applications. The fundamental design of the original applications wasn't flexible enough to handle our new business needs. Combined with questionable coding practices (a lack of separation of model, view, and controller, plus aging technologies and a lot of "NIH" syndrome), it was decided that rewriting the non-central portions of our applications was best.
Sadly, I'm not entirely sure I'm 100% qualified for this, but, I seem to be the most qualified of our team.
90% of my job is maintenance, or seems to be. But surprisingly, I've got about four projects of new development going or in the pipeline.