Lately I have been getting more engrossed in learning Oracle and geospatial systems. I feel that mapping systems, combined with solid data structures, are two technologies that are carving out their niche in today's market.
If you were starting to learn about these technologies, where would you recommend beginning? If I understand correctly, the best way to learn them is through actual work (or a hobby project), but I can't seem to find good places to get the resources to do so.
I would appreciate any advice, tips, resources and information everyone could provide to jump-start my learning and understanding of these technologies.
Thanks.
Update:
I found a nice PDF on this topic, but for a hobbyist wanting to learn, are there free tools to start off with?
http://download.oracle.com/otndocs/products/mapviewer/pdf/mv11g_spatialvis_inobiee.pdf
You appear to be interested in OLAP/BI combined with GIS/mapping.
See information on Spatial OLAP (aka SOLAP) at http://www.spatialbi.org/ , as well as this list of tools at http://spatialolap.scg.ulaval.ca/DevApproaches.asp
Also, see GeoKettle at http://www.spatialytics.org/
Related
These frameworks are billed as the future of the fast web, but I can't find any benchmark or feature comparison of them on Google. Which framework would be better in which situation, for example for building a high-load online shop? For building a Stack Overflow clone?
Could someone also explain the basic differences in their memory management and request handling, please?
Though the official documentation links to TechEmpower, ChicagoBoss is not mentioned there anywhere. Looking closely at ChicagoBoss, it seems to be targeted mostly at Erlang developers, and Erlang is not the most popular language out there. I'm fanatical about Phalcon, but I feel that ChicagoBoss would be faster and more resource-efficient out of the box. But then, writing your entire app in binary code right away would be even better in that sense.
In less than two years, Phalcon has achieved greater popularity and a bigger reputation than ChicagoBoss has in five. There is significantly more information and support out there for Phalcon, since all standard PHP rules and information apply to it as well. Phalcon's next big release is under active development and looks very promising.
What framework in which situation would be better for example for
building highload online shop? For building stackoverflow clone?
I'm certain that neither Amazon nor SO uses either of them, but both rely on a lot of caching and infrastructure optimisation to get where they are – a job for a different type of framework.
Phalcon is a great lightweight tool for building unique projects with a focus on high performance. It behaves very nicely with PhpStorm, and development / debugging is a pleasure most of the time. But be warned: it will give you a lot of headaches (there are a few bugs, and some information is hard to come by), so it isn't the best choice for enterprise software; you will spend a lot of time figuring out how things work and how to fix some of them.
This is not a technical question, but I want suggestions from more experienced people regarding my career.
I have been working as a UNIX admin for the past 13 years, the majority on Solaris and a couple of years on Linux. Now I want to learn something more that can advance my career. I have been hearing a lot about Hadoop/Big Data for quite some time. I do not have any programming or scripting knowledge, nor any knowledge of Apache or any database.
- I am assuming that there are two different job profiles, Developer and Admin. Am I understanding that correctly?
- Do I need to learn Apache, databases, or Java to learn Hadoop (even for the Admin job profile)?
- Training is expensive where I live. If I want to start studying with books, which book should I start with? I can see the popular ones are "Hadoop: The Definitive Guide" (O'Reilly) and also "Big Data for Dummies". (I am asking at a beginner's level.)
Please help with my doubts. Your suggestions will help me make a decision.
(Moved from comment because too long.)
In order to administer Hadoop in any meaningful way you need to know a fair bit about (a) how Hadoop works, (b) how Hadoop runs its jobs, and (c) job-specific tuning.
I don't know what "learning Apache" means; Apache is a conglomerate of projects, unless you mean the web server itself.
"Learning databases" is too broad to be useful, and Hadoop isn't a database (HBase is).
You don't need any Java knowledge to administer a Java-based program, although knowing about JVM options, how to specify them, and generalities is certainly helpful.
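To illustrate the JVM-options point: daemon settings typically live in hadoop-env.sh. Here is a minimal sketch; the specific heap sizes and flags are purely illustrative, not recommendations, and would be tuned for your own cluster:

```shell
# hadoop-env.sh (illustrative values only -- tune for your cluster)
# Extra JVM options for the NameNode daemon: larger heap, GC logging
export HADOOP_NAMENODE_OPTS="-Xmx4g -verbose:gc $HADOOP_NAMENODE_OPTS"
# Default heap size in MB for Hadoop daemons unless overridden
export HADOOP_HEAPSIZE=2000
```

Knowing what such flags mean (heap sizing, garbage collection) is exactly the kind of JVM generality that helps an admin, without requiring you to write any Java.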
There is a lot to digest, I would start very small, e.g., intro books. Also, keep in mind that there are other solutions besides Hadoop, and a lot of different ways to actually use Hadoop.
The Kiji project is a good way to get Hadoop/HBase/etc up and running, though if you're interested in doing everything "from scratch", it's not the best path.
A couple years ago I saw a fantastic presentation on machine learning based on using Google as the data source. The idea was to leverage Google and Ruby to get more people involved in the concepts of machine learning since massive amounts of data are now readily accessible. For the life of me I have not been able to find this presentation. I realize that this wouldn't normally be a very good format to ask this question, however the content was so valuable and well presented that I felt we would all be enriched by having another pointer to this information.
Although I realize this is somewhat vague, can anyone refer us to the original video presentation?
If not, could you share some useful links that would get one started down this road of machine learning leveraging massive data sources that now exist and are generally available?
As was noted in the comments, the link is: Intuition & Data-Driven Machine Learning
He particularly piqued my interest with this quote: "... in certain cases, you are simply better off working on getting more data than spending your time on improving the algorithm ..."
Excellent presentation and presenter (Ilya Grigorik)! Highly recommended for anyone wanting to start down the path of machine learning.
We're currently developing a fairly complex web portal. To improve the user experience, we want to provide a context-sensitive online help system that can aid users in understanding certain aspects of the site.
In our case, the site has a variety of widgets that display all kinds of tabular data, graphs, etc. For instance, one such widget may display the VIX, and the help system would offer a brief description of what the VIX is.
Now, I've looked around on the internet and found some interesting articles, such as the Design Checklists for Online Help, but most of what I found seems fairly outdated. What I'm specifically interested in are design issues such as these:
whether (or when) to use popups, divs, or link to external pages
how comprehensive should the help entry be? how much is the average user willing to read?
what's a good way to provide access to the help system? cluttering the UI with question-mark icons is certainly not optimal
should the help entry be loaded on demand with AJAX (kinda sucks, since you want the info right away) or preloaded (causing tons of unnecessary traffic)?
other dos and don'ts
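On the AJAX question specifically, one middle ground is to load each help entry on demand but cache it, so the fetch delay is paid at most once per topic per session. A minimal sketch; the class, the loader callback, and the URL scheme in the comment are illustrative assumptions, not part of your portal:

```typescript
// On-demand help loading with a per-topic cache: each entry is
// fetched at most once, and repeat views are instant.
type Loader = (topic: string) => Promise<string>;

class HelpProvider {
  private cache = new Map<string, Promise<string>>();

  constructor(private load: Loader) {}

  // Returns the help text for a topic, fetching it only on first request.
  get(topic: string): Promise<string> {
    let entry = this.cache.get(topic);
    if (!entry) {
      entry = this.load(topic); // e.g. fetch(`/help/${topic}.html`).then(r => r.text())
      this.cache.set(topic, entry);
    }
    return entry;
  }
}
```

Caching the Promise rather than the resolved string also deduplicates concurrent requests for the same topic; prefetching on mouseover of the help icon can hide most of the remaining latency.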
The answers to some of these questions may seem obvious, but when it comes to usability, my experience has been that the intuitive answer isn't always the best one. Secondly, I'm a software developer, and as such I tend to look at things from an engineer's point of view. And I think we all know that this is, more often than not, a pretty poor angle from which to approach the design of a user interface. This is why I would very much like to get some feedback from people more experienced in this field.
See here:
https://ux.stackexchange.com/questions/1351/best-practices-for-online-help
If you were to develop Twitter today, what language, tools, and approach would one take? How would one start from a very frugal configuration and gradually scale to the levels Twitter has reached today? If you can, please provide direct responses like (PHP + Apache + memcached + MySQL) or (JSP + Tomcat/Glassfish + MySQL / other DB), etc.
The criterion is an architecture which scales easily without much engineering, plus the right language, so that one doesn't need to rethink the decision once it is in place.
(As far as I know, Twitter is RoR, LinkedIn is Java, and Digg is PHP. So I'm not looking for just random thoughts :) ) Please support why you think your option should suffice.
Thanks
As you already say, there are several applications that show that several technologies are able to scale. Fortunately for them.
I think you should not focus only on "is this technology the best for scaling?", but on the following two points:
Do you have skills in that technology?
Is that technology suited (by its philosophy) to that application?
Scaling is one thing. But if you can't develop your application with the "killer" technology because you don't understand it, it's useless anyway.
I recommend looking at the High Scalability website. You can build a scalable web app in virtually any language, but it's not just a matter of using the right technology and then plugging it in. You have to know what you're doing, no matter what technology you use!
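To make the caching point concrete, here is a toy sketch of the kind of layer such sites put in front of the database: an in-memory cache with a time-to-live, standing in for something like memcached. The class and names are illustrative only, not any particular library's API:

```typescript
// Tiny time-to-live (TTL) cache: entries expire after ttlMs milliseconds,
// so stale data is re-fetched from the backing store on the next miss.
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();

  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const hit = this.store.get(key);
    if (!hit) return undefined;
    if (Date.now() > hit.expires) { // stale entry: evict and report a miss
      this.store.delete(key);
      return undefined;
    }
    return hit.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}
```

The pattern at every expensive boundary is the same: check the cache, and only on a miss hit the database and store the result. Which technology stack surrounds it matters far less than applying this consistently.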
Twitter was developed using the Ruby on Rails (RoR) framework, and that seems to be a good choice. Ruby on Rails is database-agnostic (it supports most databases), very scalable, and very good for developing web applications quickly.
CakePHP is a popular alternative for PHP; I haven't used Cake, but I hear it is very similar. The alternative to these open-source options would be a full-blown enterprise environment like the Microsoft .NET Framework.