I'm currently using a stack consisting of Backbone, Parse, Require, and Marionette.
I've found through my application that I often need to reuse objects I've already pulled down from Parse.
Parse already does this for the current user through Parse.User.current(); however, it would be great to store other entities locally rather than retrieving them over and over again.
Does anyone have suggestions for good practices or libraries for caching these objects locally, or would global variables that hold the information while the application runs be enough?
The Parse JavaScript SDK is open source, so you could look at the implementation of Parse.User.current and Parse.User._saveCurrentUser. Maybe you could do something similar. http://www.parsecdn.com/js/parse-1.1.11.js
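In case it helps, here is a minimal sketch of that idea: cache an object's attributes in localStorage with a timestamp and only hit Parse on a miss. The "Settings" class, the cache key, and the one-hour TTL are just placeholders for whatever entities you need.

```javascript
// A minimal sketch of a localStorage-backed cache for Parse objects.
// "Settings", the cache key, and the TTL are illustrative, not part of the SDK.
var CACHE_TTL_MS = 60 * 60 * 1000;

function getCached(key) {
  var raw = localStorage.getItem(key);
  if (!raw) return null;
  var entry = JSON.parse(raw);
  if (Date.now() - entry.savedAt > CACHE_TTL_MS) return null; // expired
  return entry.attributes;
}

function putCached(key, parseObject) {
  localStorage.setItem(key, JSON.stringify({
    savedAt: Date.now(),
    attributes: parseObject.toJSON() // plain attributes, similar to what _saveCurrentUser stores
  }));
}

// Usage: try the cache first, fall back to a Parse.Query on a miss.
function loadSettings(callback) {
  var cached = getCached('settings');
  if (cached) return callback(cached);
  new Parse.Query('Settings').first({
    success: function (obj) {
      putCached('settings', obj);
      callback(obj.toJSON());
    },
    error: function (err) { console.error(err); }
  });
}
```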
I'm new to MarkLogic and mlcp. I'm working on MarkLogic 9.0-8. I want to use mlcp to load content, but since some parameters may need to be built dynamically based on the content, does anyone know if it is possible to call mlcp from a Java application?
Thanks a lot,
Helen
MarkLogic provides two Java-based ways to load content: MLCP and DMSDK. MLCP is intended to be used as a command-line tool (and I believe that's the only supported use).
The Data Movement SDK, on the other hand, is specifically intended to offer very similar functionality in the form of a JAR, making it easy to access from a Java application. I encourage you to look into using that instead.
tutorial
JavaDoc
Asynchronous Multi-Document Operations
12-minute video intro to DMSDK
common tasks made easier through ml-gradle
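To give a feel for it, here is a minimal sketch of loading one document with DMSDK's WriteBatcher; the host, port, credentials, and sample URI/content are placeholders you would replace with values computed at runtime.

```java
// A minimal sketch of loading a document with the Data Movement SDK (DMSDK).
// Host, port, credentials, and the sample URI/content are placeholders.
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.datamovement.DataMovementManager;
import com.marklogic.client.datamovement.WriteBatcher;
import com.marklogic.client.io.Format;
import com.marklogic.client.io.StringHandle;

public class LoadExample {
    public static void main(String[] args) {
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("admin", "admin"));

        DataMovementManager dmm = client.newDataMovementManager();
        WriteBatcher batcher = dmm.newWriteBatcher()
                .withBatchSize(100)
                .withThreadCount(4);

        dmm.startJob(batcher);
        // URIs, collections, and content can be computed here at runtime,
        // which is the flexibility that is hard to get from the MLCP command line.
        batcher.add("/example/doc-1.json",
                new StringHandle("{\"hello\":\"world\"}").withFormat(Format.JSON));
        batcher.flushAndWait();
        dmm.stopJob(batcher);
        client.release();
    }
}
```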
I am working on a CLI in Go that scrapes a webpage to collect the href attributes of all the links on the page into a slice. I want to store this slice in memory for some time so that the scraper is not being called on every execution of the CLI command. Ideally, the scraper would only be called after the cache expires or the user provides some sort of --update flag.
I came across the library go-cache and other similar libraries, but from what I could tell they only work for something that is continuously running, like a server.
I thought about writing the links to a file, but then how would I expire the results after a specific duration? Would it make sense to create a small server in the background that shuts down after a while in order to use a library like go-cache? Any help is appreciated.
There are two main approaches in these scenarios:
1. Create a daemon, service, or background application that acts as your data repository. You can run it as an HTTP or RPC server, depending on your requirements; your CLI application then interacts with this daemon as required.
2. Implement a persistence mechanism that allows data to be written and read across multiple CLI application executions. You may use plain text files, databases, or Go's encoding/gob to write and read your slice (a map would probably be better) to and from a binary file.
You can timestamp entries and expire them once their TTL has passed, either by explicitly deleting them or by simply not rewriting them during subsequent executions, depending on the approach selected above.
The scope and number of examples for such an open-ended question is too broad to cover in a single answer and will most likely require multiple, more specific questions; still, a rough sketch of the second approach is given below.
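For what it's worth, this sketch persists the scraped links to a gob file with a timestamp and reuses them until a TTL expires or a --update flag is passed. The file name, the 24-hour TTL, and the fetchLinks stub are illustrative only.

```go
package main

import (
	"encoding/gob"
	"fmt"
	"os"
	"time"
)

// cache is what gets written to disk: the links plus when they were fetched.
type cache struct {
	FetchedAt time.Time
	Links     []string
}

const cacheFile = "links.gob"
const ttl = 24 * time.Hour

func load() (*cache, error) {
	f, err := os.Open(cacheFile)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	var c cache
	if err := gob.NewDecoder(f).Decode(&c); err != nil {
		return nil, err
	}
	return &c, nil
}

func save(links []string) error {
	f, err := os.Create(cacheFile)
	if err != nil {
		return err
	}
	defer f.Close()
	return gob.NewEncoder(f).Encode(cache{FetchedAt: time.Now(), Links: links})
}

func main() {
	update := len(os.Args) > 1 && os.Args[1] == "--update"
	// Reuse the cached links if they exist, are fresh, and no refresh was requested.
	if c, err := load(); err == nil && !update && time.Since(c.FetchedAt) < ttl {
		fmt.Println("cached:", c.Links)
		return
	}
	links := fetchLinks() // placeholder for the real scraper
	if err := save(links); err != nil {
		fmt.Fprintln(os.Stderr, "cache write failed:", err)
	}
	fmt.Println("fresh:", links)
}

// fetchLinks stands in for the scraper that collects href attributes.
func fetchLinks() []string {
	return []string{"https://example.com/a", "https://example.com/b"}
}
```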
Use a database and store as much detail as you can (fetched_at, host, path, title, meta_desc, anchors, etc.). You'll be able to query the data later, and it will be useful to have it in a structured format. If you don't want to deal with a database dependency, you could embed something like boltdb (pure Go) or sqlite (cgo).
I have a simple database application in mind and I am thinking of making it browser-accessible instead of creating a standalone one.
I have almost finished creating the DB schema on a PostgreSQL server and will now start developing. My first idea was to use PHP or Ruby on Rails to manage the backend logic and to interface with the DB, but since this application is fairly simple, I think I can easily implement all business and data-manipulation logic with JavaScript or with DB triggers.
So I am now wondering: is there a way to directly send the queries to a PostgreSQL Server, without server-side scripting?
More generally: can a PostgreSQL (9.3) server receive queries in HTTP requests and provide the results in HTTP responses?
I know this might sound stupid, and I am not looking for answers like "Use JS for presentation, PHP for logic and DB for data storage". I believe this is a lightweight solution for a very simple application, so I want to try it if possible!
Yes, that is possible.
What you can do is expose the database through a REST API and send queries as GET/POST requests.
Here are some references for you:
https://github.com/begriffs/postgrest
https://github.com/pgrest/pgrest
Please take a look at these for more HTTP API options.
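As an illustration of what this looks like from the browser, here is a small sketch of calling a PostgREST endpoint with fetch. The base URL, the customers table, and the city filter are made up; PostgREST simply exposes each table in your schema as a route.

```javascript
// A rough sketch of querying PostgREST straight from the browser.
// The endpoint, table name ("customers"), and filter are placeholders.
var API = 'http://localhost:3000';

// Roughly: SELECT * FROM customers WHERE city = 'Rome'  (PostgREST filter syntax)
fetch(API + '/customers?city=eq.Rome')
  .then(function (res) { return res.json(); })
  .then(function (rows) { console.log(rows); });

// Roughly: INSERT INTO customers ...  via a POST with a JSON body
fetch(API + '/customers', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Ada', city: 'Rome' })
});
```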
[update!]
This idea is currently not possible (contrary to what I thought when I answered you before).
I thought it was possible after checking the node-postgres library, which is written in JavaScript, but it uses Node.js-specific functions that are not present in the web browser, as stated by the library's creator himself and by this answer on Stack Overflow.
There is a package called browserify that bundles a Node.js JavaScript file into a browser-ready JavaScript file. The problem with node-postgres + browserify is that it throws some errors during the browserification process, precisely when it tries to access libpq (a C API for accessing PostgreSQL).
I'm sorry I misled you.
Yet I still have a suggestion for you. You can try CouchDB if you really want to build a backendless/serverless application. It is natively RESTful, handles authentication and authorization to some extent, and is open source, but unfortunately it is NoSQL. It processes queries based on the map/reduce paradigm and the Mango query language, so it is an entirely different world to discover if you are used to SQL.
[old answer, I'm leaving it here for learning purposes]
Have you considered using a PostgreSQL driver for JavaScript? It is not RESTful, but it can connect to PostgreSQL and query it!
The library is called node-postgres and you can download it via npm:
https://www.npmjs.com/package/pg
Just don't forget to enable SSL connections on the PostgreSQL server and in the client to avoid man-in-the-middle attacks.
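For reference, here is a minimal sketch with the current pg client API, run from a Node.js process (not the browser, per the update above); the connection details and query are placeholders, and SSL is enabled as suggested.

```javascript
// A minimal sketch of querying PostgreSQL with node-postgres from Node.js.
// Host, database, credentials, and the customers query are placeholders.
const { Client } = require('pg');

const client = new Client({
  host: 'db.example.com',
  port: 5432,
  database: 'mydb',
  user: 'app_user',
  password: 'secret',
  ssl: true // use TLS for the connection, as recommended above
});

client.connect()
  .then(() => client.query('SELECT id, name FROM customers WHERE city = $1', ['Rome']))
  .then(res => {
    console.log(res.rows);
    return client.end();
  })
  .catch(err => {
    console.error(err);
    return client.end();
  });
```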
And here's a tip: if you need an ACL for allowing or denying selects or inserts for specific users, you can manage that through PostgreSQL's user management and privileges. PostgreSQL has row-level security, allowing you to define which rows in a table can be selected, updated, or deleted by a given set of users or groups.
I don't really plan on using Active Record or any of the built-in database constructs native to CodeIgniter for database access. I have Oracle, SQL Server, and others. I want to use PHP's PDO (unless anyone thinks that's a bad idea) because of its universality.
I mainly want CI because of some of the built in libraries and MVC. I also like that it is small and easy to work with.
I'm on CodeIgniter 2.x, if it matters.
I did see other questions but none exactly about databases.
Thanks.
Edit: it's not that I don't think CI and PHP can handle large websites. This is solely about using multiple databases from different vendors. I have mostly seen MySQL used with CI. I know I can use other databases, but I don't know whether it's more trouble than it's worth.
MySQL is the default just because of how widely adopted it is, especially in the PHP world. Almost everyone has a *AMP stack to work on, so it ends up being the main driver used in almost every example out there.
If you're not planning on using the database class, then it really doesn't matter what type of database you are using; just don't load the class. You can still use routing, helpers, libraries, and other CI features.
So yes, I do think it is suitable for your purposes.
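To illustrate, here is a small sketch of a CI 2.x model that talks to Oracle through PDO directly, without ever loading CI's database class; the model name, DSN, credentials, and table are hypothetical, and the same pattern works with a sqlsrv or mysql DSN.

```php
<?php
// A minimal sketch: a CodeIgniter 2.x model using PDO directly,
// without CI's database class. DSN, credentials, and table are placeholders.
class Report_model extends CI_Model {

    private $pdo;

    public function __construct()
    {
        parent::__construct();
        // Assumes the PDO_OCI driver is installed; swap the DSN for sqlsrv, mysql, etc.
        $this->pdo = new PDO('oci:dbname=//dbhost:1521/ORCL', 'app_user', 'secret', array(
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ));
    }

    public function find_report($id)
    {
        $stmt = $this->pdo->prepare('SELECT * FROM reports WHERE id = :id');
        $stmt->execute(array(':id' => $id));
        return $stmt->fetch(PDO::FETCH_ASSOC);
    }
}
```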
CodeIgniter was built with the idea of being the framework closest to native PHP that doesn't tell you what to do. The entire framework is modular and you are not required to use any single component.
Yes, it is absolutely suited to what you are doing. You can plug and play whatever DB driver you want and CI will not complain one bit.
I think CI is better suited to this role than any of the other 'big' frameworks.
I want a real and honest opinion: what do you think of the Google Visualization API?
Is it reliable to use? When I was reading the documentation I noticed that there are a lot of issues and defects to overcome. Also, can I use it to retrieve data from a MySQL database?
Thank you.
I am currently evaluating it. Compared to other JavaScript data visualization frameworks, I think it has a lot going for it:
dynamic loading is built-in
diverse: many chart types to choose from.
looks really great!
framework mostly takes care of picking whatever implementation fits the current browser
service based, you don't need to download anything in advance
unified data source: just create one data table and have multiple visualizations draw from that data.
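As a small illustration of that last point, here is a sketch using the current loader; the element ids and sample rows are made up, and the same DataTable feeds two different visualizations.

```javascript
// A minimal sketch of the "one data table, many visualizations" idea.
// Element ids ('line_div', 'table_div') and the sample rows are placeholders.
google.charts.load('current', { packages: ['corechart', 'table'] });
google.charts.setOnLoadCallback(draw);

function draw() {
  var data = google.visualization.arrayToDataTable([
    ['Month', 'Visits'],
    ['Jan', 1200],
    ['Feb', 1560],
    ['Mar', 980]
  ]);

  // The same DataTable drives both a line chart and a table view.
  new google.visualization.LineChart(document.getElementById('line_div')).draw(data);
  new google.visualization.Table(document.getElementById('table_div')).draw(data);
}
```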
As a disadvantage, I'd like to mention security. Because it's all service-based, it is not very transparent what happens when you pass data into these API calls. And as far as I know, the API is free but not open source, so I can't really check what is going on under the covers.
I think the Google Visualization API really shines if you want to very quickly whip up a visualization gadget for use in a blog or similar, and you are not interested in deploying all kinds of plugins and libraries (for example, with jQuery-based frameworks, you may need to manage multiple JavaScript libraries that work together to deliver the goods). If, on the other hand, you are creating an application that you want to sell, you might want to keep more control over the components you are using, and I would probably consider using something like Flot.
But like I said, I am only evaluating at the moment; I am not using this in production.
It works really well for me. It can be customized fairly easily. I haven't seen any scaling issues. No data is exposed, so security should not be an issue. - Arunabh Das
One point I want to add here is that the Google Visualization API cannot be downloaded; it is not available for offline use. So an application that uses it must always be connected to the internet, otherwise it won't be able to render charts. Because of this limitation, the API cannot be used in applications where an internet connection is not available.
I am currently working on a web-based application that will use the Google Visualization API, and from a developer's perspective it is very limited in what you can do with each individual chart. If I had a choice, I would probably look at dojox charting instead, just because of the extra flexibility that framework gives you.
If you are building any kind of large web application that will use charting extensively, then I would not recommend the Google Visualization API; it does not have enough flexibility for a large web application.
I am using the Google Visualization API and I want to stress that they still won't let you download it, which means that if their servers are down, your app will be down if you depend on it. I have been using it for about 4 months and it has crashed on me once, so I'd say it's pretty reliable, and the documentation is really nice.