Why and When to use JSON [closed] - ajax

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I know what JSON is and what its advantages over XML are. I've already read some answers about this, but I still can't get my head around it.
So I'll ask these questions specifically:
1. Is it only useful for APIs, i.e. exchanging data without refreshing the whole page using AJAX?
2. Is it always used with AJAX?
3. Do people (always/very often) use JSON like this: Database/Server - JSON - Client? What I mean by that is, all our data from the database is put into JSON so that people can use it easily from any other platform/language.
Because from my point of view, if the data we need to output isn't much, why not just write it into the HTML directly, and if it's a lot of data, why not use a database? If you don't mind, please add an example case where JSON is the right choice.
Big thanks, everyone!

Because JSON is a lightweight data interchange format, its uses vary widely. You describe using it for an API, which would be an ideal situation to use JSON output over something like XML.
To specifically answer your questions:
It's not just useful for APIs. It can be used for configuration (for example, Composer's JSON configuration file). It can also be used for basic output that is easy to read from languages like JavaScript, since JSON is native to JavaScript as an object (JavaScript Object Notation).
It's not always used with AJAX. Say you were building a PHP application to convert currency and wanted to read from an API that outputs JSON. Because languages like PHP have the ability to encode and decode JSON, you could read from the API (or another source) and decode it, giving you a PHP object or array of the JSON data.
I think you mean reading from a database, outputting that in JSON format and then allowing clients to read it through an API. That's not the only way JSON is used, but if I had to guess, it's the most common way, and probably the most useful.
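To make the decode step from point 2 concrete, here is a minimal sketch (in Ruby rather than PHP; JSON.parse plays the same role as PHP's json_decode, and the endpoint URL is made up):
require 'json'
require 'net/http'

# Fetch JSON text from a (hypothetical) API endpoint.
body = Net::HTTP.get(URI('https://api.example.com/rates.json'))

# Parse it into native hashes and arrays, like json_decode in PHP.
rates = JSON.parse(body)
puts rates['EUR']  # assumes the payload contains an "EUR" key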

In my opinion, JSON is something you use to describe data you get over the network. JSON is only a data format; it isn't always used with AJAX. Its building blocks are arrays and dictionaries (objects).

Related

Is it wrong to use routes versus query string parameters? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I have a Web API controller with two actions. One action returns a list of all entities from the database. The second action takes a query string parameter and filters the entities.
The search action is wired up to use the query string parameter. It works, but we ran into an issue with a documentation tool, because the two action signatures are the same (the tool does not take the query string into account).
Is it wrong to move the search parameter from the query string to an attribute in the route? Is this an acceptable approach? Will it cause problems I'm not thinking about?
Currently, this is a URL I use:
domain.com/api/entities?q=xyz
I'm considering moving to a route-based approach:
domain.com/api/entities/xyz
If you are implementing a search feature, or other type of feature where you need to have multiple optional parameters, it is better to use query string parameters. You can supply all of them, some of them, or none of them and put them in any order, and it will just work.
// Anything Goes
/controller/action?a=123&b=456&c=789
/controller/action?c=789&a=123&b=456
/controller/action?b=456&c=789
/controller/action?c=789
/controller/action
On the other hand, if you use URL paths and routing, a parameter can only be optional if there is not another parameter to the right of it.
// OK
/controller/action/123/456/789
/controller/action/123/456
/controller/action/123
// Not OK
/controller/action/456/789
/controller/action/789
It is possible by customizing routing to be able to pass optional values in any order, but it seems like a long way to go when query strings are a natural fit for this scenario.
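To see why order doesn't matter, note that any standard query-string parser hands you a name/value map. A quick sketch in Ruby (the question is about Web API, but the principle is the same everywhere):
require 'cgi'

# Each of these parses to the same kind of name => values map,
# regardless of order or which optional parameters are present.
CGI.parse('a=123&b=456&c=789')  #=> {"a"=>["123"], "b"=>["456"], "c"=>["789"]}
CGI.parse('c=789&a=123&b=456')  #=> {"c"=>["789"], "a"=>["123"], "b"=>["456"]}
CGI.parse('c=789')              #=> {"c"=>["789"]}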
Another factor to consider is whether the values being put into the URL contain unsafe characters that need to be encoded. It is poor form, and sometimes not feasible, to encode the path of the URL, but the rules for which encoded characters may appear in a query string are more relaxed. Since URLs don't allow literal spaces, a multi-word search field is a better fit for the query string, where the space can simply be percent-encoded and preserved, than for the path, where you would end up swapping the space for a - and then changing it back to a space on the server side when running the query.
search = "foo bar"
// Spaces OK
/controller/action?search=foo%20bar (works fine and server is able to interpret)
// Spaces Not OK
/controller/action/foo bar/ (not possible)
/controller/action/foo%20bar/ (may work, but a questionable design choice)
/controller/action/foo-bar/ (may work, but requires conversion on the server)
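For completeness, here is what the encoding step looks like; a sketch in Ruby, where the standard-library CGI.escape produces a query-string-safe value (the parameter name is made up):
require 'cgi'

search = 'foo bar'
CGI.escape(search)  #=> "foo+bar" (spaces become +, which is valid in a query string)
"/controller/action?search=#{CGI.escape(search)}"
#=> "/controller/action?search=foo+bar"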
Finally, another choice worth considering is using POST instead of GET, as that would mean these values wouldn't need to be in the URL at all.

Rails #raw or #html_safe methods are truncating text. Is there any way around this? [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 8 years ago.
I am new to Rails and am trying to put together a little app that reads Cucumber results from a MongoDB database. The results stored in the document are parsed into HTML. In the Rails app I take those results and display them using the raw() method. The string I get back is fairly large and, as it turns out, the raw() method appears to truncate the text I pass into it. When I output the text without raw() I get the entire string as expected (except that it has been escaped and doesn't render as HTML).
My question is: is there any way to get around this? I really don't want to do the HTML conversion in the Rails app or on the client; both seem too costly, especially when I can do it elsewhere and just store it in MongoDB as an HTML string. Anyone have any ideas?
Thanks,
Jake
It turns out that there was a part of the string that was causing the rendering of the HTML to choke. Because Cucumber syntax passes variables to Scenario steps using < >, there were places where <style> was written. Because <style> is a valid opening HTML tag, the browser stopped rendering the HTML at that point. I found this out by looking at the page source (before that I had been using the inspect-element view in the developer tools) and saw that the whole HTML I was expecting was in the source. I parsed through the text and used gsub to replace the <style> tag, and all is working now.
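For anyone hitting the same problem, the fix is a one-line substitution along the lines described above (a sketch; <style> is just whatever placeholder token your scenarios happen to emit):
# Escape the Cucumber placeholder so the browser doesn't treat it
# as a real HTML tag; everything else stays renderable HTML.
html = html.gsub('<style>', '&lt;style&gt;')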

How to generate a "canned response" with several variables [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I'm brand new to Ruby and programming. I'd like to create a little program to automate one of my more tedious work tasks that I'm currently doing by hand but I'm not sure where to start.
People register to take courses through an online form, and I receive their registration information at the end of each day as a CSV document. I go line by line through that document and generate a confirmation email to send to them based on their input on the online form: the course they'd like to take, their room preference, how much they chose to pay for the course (sliding scale), etc. The email ends up looking something like this:
Dear So and so, Thank you for signing up for "Such-and-such An Awesome Course," with Professor Superdude. The course starts on Monday, September 1, 2030 at 4pm and ends on Thursday at 1pm. You paid such-and-such an amount...
et cetera. So ideally the program would take in the CSV document with information like "student name," "course title," "fee paid," and generate emails based on blocks of text ("Dear ___, thank you for signing up for ___,") and variables (the dates of the course) that are stored externally, so they are easy to edit without going into the source code (maybe as CSV and plain text files).
Additionally, I need the output to be in rich text, so I can bold and underline certain things. I'm familiar with Markdown so I could use that in the source code but it would be ideal if the output could be rich text.
I'm not expecting anyone to write a program for me, but if you could let me know what I should look into or even what I should Google, that would be very helpful.
I assume you're trying to put together an email. If so, I'd start with a simple ERB template. If you want to generate HTML, you can write one HTML template and one plain-text template; variable substitution works the same way for both, except that you'll need to HTML-escape anything containing characters that HTML considers special (ampersands, greater-than and less-than signs, for example). See the ERB documentation.
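A minimal sketch of the ERB idea (the template text and variable names are made up; in practice the template would live in its own file):
require 'erb'

template = ERB.new('Dear <%= name %>, thank you for signing up for "<%= course %>".')

name   = 'So and so'
course = 'Such-and-such An Awesome Course'

# The template is evaluated against the local variables in scope.
puts template.result(binding)
#=> Dear So and so, thank you for signing up for "Such-and-such An Awesome Course".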
If you're trying to parse CSV, use FasterCSV or a similar library. FasterCSV is documented here.
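On Ruby 1.9 and later, FasterCSV ships as the standard csv library, so reading the registration file might look like this (the file name and column names are assumptions):
require 'csv'  # this is FasterCSV on Ruby 1.9+

CSV.foreach('registrations.csv', headers: true) do |row|
  puts row['student name']  # each row behaves like a hash keyed by the header row
end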
If you want to send an email, you can use ActionMailer, the mail gem, or the pony gem. ActionMailer is part of Rails but can be used independently. Pony is a good facade for creating email as well; both ActionMailer and Pony depend on the mail gem, so unless you want to spend more time thinking about how email formats work, use one of those.
If you're not trying to send an email, and instead are trying to create a formatted document, you can still use ERB, but use it to generate output in TeX or, if you're more adventurous than I am, a Word-compatible XML document. Alternatively, if you're wedded to Microsoft Word or RTF, you might try http://ruby-rtf.rubyforge.org/ (Ruby RTF) or use COM/OLE interop to talk to Word, but I would only do that if I really had to; if I had to go that route, I'd probably suck it up and just use the built-in mail merge feature in Word, perhaps with a little VBA code.

known services to validate csv file [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 1 year ago.
Are there any good sites/services to validate the consistency of a CSV file?
Something like the W3C validator, but for CSV?
Thanks!
I recently came across Google Refine (now OpenRefine) - it's not a service for validating CSV files, it's a tool you download locally, but it does provide a lot of tools for working with data and detecting anomalies.
As mentioned in a reply, "CSV" has become an ill-defined term, principally because people don't follow the One True Way when using delimiter-separated data:
http://www.catb.org/~esr/writings/taoup/html/ch05s02.html
EDIT/UPDATE (2016-08-09):
CSV is currently becoming a well-defined term thanks to the W3C CSV Working Group.
The Open Data Institute is developing a CSV validation service that will allow users to check the structure of their data as well as validate it against a simple schema.
The service is still very much in alpha but can be found here:
http://csvlint.io/
The code for the application and the underlying library are both open source:
https://github.com/theodi/csvlint
https://github.com/theodi/csvlint.rb
The README in the library provides a summary of the errors and warnings that can be generated. The following types of error can be reported:
:wrong_content_type -- content type is not text/csv
:ragged_rows -- row has a different number of columns (than the first row in the file)
:blank_rows -- completely empty row, e.g. blank line or a line where all column values are empty
:invalid_encoding -- encoding error when parsing row, e.g. because of invalid characters
:not_found -- HTTP 404 error when retrieving the data
:quoting -- problem with quoting, e.g. missing or stray quote, unclosed quoted field
:whitespace -- a quoted column has leading or trailing whitespace
The following types of warning can be reported:
:no_encoding -- the Content-Type header returned in the HTTP request does not have a charset parameter
:encoding -- the character set is not UTF-8
:no_content_type -- file is being served without a Content-Type header
:excel -- no Content-Type header and the file extension is .xls
:check_options -- CSV file appears to contain only a single column
:inconsistent_values -- inconsistent values in the same column. Reported if <90% of values seem to have same data type (either numeric or alphanumeric including punctuation)
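If you only need a quick local sanity check rather than a hosted service, the most common of those problems (:ragged_rows) is easy to catch with Ruby's standard CSV library; a sketch (the file name is made up):
require 'csv'

rows  = CSV.read('data.csv')
width = rows.first.size

# Report any row whose column count differs from the first row's.
rows.each_with_index do |row, i|
  puts "row #{i + 1}: #{row.size} columns (expected #{width})" if row.size != width
end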
The National Archives developed a CSV Schema Language and CSV Validator, software written in Java. It's open source.
To validate a CSV file I use the Rainbow CSV extension in Visual Studio Code, and I also open the CSV file in Excel.
There is another good way to validate your CSV file. I am referring to this article, where the whole process is explained in detail.
The validation process has two steps: the first is to POST the file to the API. Once your file is accepted, the API returns a polling endpoint that contains the results of the validation process. There is a 10 MB limit per file.
CSV Lint at csvlint.com (not .io :) is a service we're building to solve this problem. It checks CSV files against user-defined validation rules / schemas cell by cell.
We spent a lot of time tweaking the UI so that users can easily create complex validation rules / schemas that meet their business needs without a single line of code.
Our offline validation feature lets users see results in real time, even when validating multiple large files (millions of rows), and most importantly it fully protects user data privacy.
Toolkit Bay CSV Validator & Linter: online and easy to use; set the delimiter and go.
Flatfile CSV validator: an online demo with automatic delimiter detection; upload and go.

i18n - best practices for internationalization - XLIFF, gettext, INI, ...? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.
Closed 10 years ago.
EDIT: I would really like to see some general discussion about the formats, their pros and cons!
EDIT2: The bounty didn't really help to create the needed discussion; there are a few interesting answers, but comprehensive coverage of the topic is still missing. Six people marked the question as a favourite, which shows me that there is interest in this discussion.
When deciding about internationalization the toughest part IMO is the choice of storage format.
For example the Zend PHP Framework offers the following adapters which cover pretty much all my options:
Array : no, hard to maintain
CSV : don't know, possible problems with encoding
Gettext : frequently used, poEdit for all platforms available BUT complicated
INI : don't know, possible problems with encoding
TBX : no clue
TMX : too much of a big thing? no editors freely available.
QT : not very widespread, no free tools
XLIFF : the coming standard? BUT no free tools available.
XMLTM : no, not what I need
Basically I'm stuck with the four 'bold' choices. I would like to use INI files, but I'm reading about the encoding problems... is it really a problem if I use strict UTF-8 (files, connections, DB, etc.)?
I'm on Windows and I tried to figure out how poEdit works, but just didn't manage. There are no tutorials on the web either; is gettext still a choice, or an endangered species anyway?
What about XLIFF, has anybody worked with it? Any tips on what tools to use?
Any ideas for Eclipse integration of any of these technologies?
POEdit isn't really hard to get the hang of. Just create a new .po file, then tell it to import strings from your source files. The program scans your PHP files for any function calls matching _("Text"), gettext("Text"), etc. You can even specify your own functions to look for.
You then enter a translation in the appropriate box. When you save your .po file, a .mo file is automatically generated. That's just a binary version of the translations that gettext can easily parse.
In your PHP script make a call to bindtextdomain() telling it where your .mo file is located. Now any strings passed to gettext (or the underscore function) will be translated.
It makes it really easy to keep your translation files up to date. POEdit also has some neat features like allowing comments, showing changed and dropped strings and allowing fuzzy matches, which means you don't have to re-translate strings that have been slightly modified.
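The runtime side is just as small. This answer describes the PHP workflow, but for a feel of the same flow in Ruby, here is roughly what it looks like with the ruby-gettext gem (the domain name and locale path are assumptions):
require 'gettext'
include GetText

# Point gettext at the directory holding locale/<lang>/LC_MESSAGES/myapp.mo
bindtextdomain('myapp', path: 'locale')

puts _('Text')  # returns the translation for the current locale, if one exists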
There is always the Translate Toolkit, which allows translating between (I think) all the formats mentioned, with gettext (po) and XLIFF preferred.
You can use INI if you want; it's just that INI has no way to tell anyone that it is in UTF-8, so if someone opens your INI file with an editor, it might corrupt your file.
So it comes down to whether you can trust users to edit it with UTF-8 encoding.
You can add a BOM at the start of the file; some editors know about it.
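Writing the BOM is a one-liner; a sketch in Ruby ("\uFEFF" is the byte-order mark, and the file name and contents are made up):
ini_content = "[en]\nbtn_save = Save\n"

# Prepend a UTF-8 BOM so editors recognise the file's encoding.
File.write('messages.ini', "\uFEFF" + ini_content)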
What do you want it to store? User-generated content, or your application resources?
I worked with two of these formats on the i18n side: TMX and XLIFF. They are pretty similar. TMX is more popular nowadays, but XLIFF is gaining support quickly. There was at least one free XLIFF editor when I last looked into it, Transolution, but it is no longer being developed.
I do the data storage myself using a custom design; all displayed text is stored in the DB.
I have two tables.
The first table has an identity value, a 32-character varchar field (indexed on this field), and a 200-character English description of the phrase.
The second table has the identity value from the first table, a language code (EN_UK, EN_US, etc.) and an NVARCHAR column for the text.
I use an NVARCHAR for the text because it supports other character sets which I don't yet use.
The 32-character varchar in the first table stores something like 'pleaselogin', while the second table stores the full "Please enter your login and password below".
I have created a huge list of dynamic values which I replace at runtime. An example would be "You have {[dynamic:passworddaysremain]} days to change your password." - this allows me to work around the word ordering in different languages.
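The substitution itself can be a single regex pass; a sketch of the idea (the {[dynamic:...]} syntax is taken from the description above, while the method name is mine):
def substitute(text, values)
  # Replace each {[dynamic:key]} token with the matching value.
  text.gsub(/\{\[dynamic:(\w+)\]\}/) { values[Regexp.last_match(1).to_sym].to_s }
end

substitute('You have {[dynamic:passworddaysremain]} days to change your password.',
           passworddaysremain: 14)
#=> "You have 14 days to change your password."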
I have only had to deal with Arabic numerals so far, but will have to work something out for the first user who requires non-Arabic numbers.
I actually pull this information out of the database every two hours and cache it to disk in one XML file per language, making extensive use of CDATA.
There are many options available; for performance you could use HTML templates for each language. My method works well, but it does use the XML DOM a lot at runtime to create the pages.
One rather simple approach is to just use a resource file and resource script. Programs like MSVC have no problem editing them. They're also reasonably friendly to other systems (and to text editors) as well. You can just create separate string tables (and bitmap tables) for each language, and mark each such table with what language it is in.
None of those choices looks very appetizing to me.
If you're sending files out for translation into multiple languages, you want to be able to trust that the encodings are correct, especially if no one on your team speaks those languages. It is sometimes difficult to spot an encoding problem in a foreign language, and it is just too easy to inadvertently corrupt file encodings if you let your OS 'guess'.
You really want a format that declares its encoding. Otherwise, translators or their translation tools might select something other than UTF-8. For my money, any kind of simple XML format is best, but it looks like you'd need to roll your own in Zend. XLIFF and TMX are certainly overkill.
A format like Java's XML resources would be ideal.
This might be a little different from what's been posted so far, and may not be exactly what you're looking for, but I thought I would add it, if for nothing else than a different approach. I went with an object-oriented design: a system that encapsulates language files in a class by storing them as an array of string => translation pairs. Access to a translation is through a translate method that takes the key string as a parameter. Extending classes inherit the parent's language array and can add to it or override entries. Because the classes are extensible, you can change a base class and have the changes propagate to the children, making it more maintainable than a bare array. Plus, you only load the classes you need.
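A compressed sketch of that design (class, method and key names are all mine):
class BaseLanguage
  # string => translation pairs; children inherit and override these.
  def strings
    { 'btn_save' => 'Save', 'btn_cancel' => 'Cancel' }
  end

  def translate(key)
    strings.fetch(key, key)  # fall back to the key itself if untranslated
  end
end

class French < BaseLanguage
  def strings
    super.merge('btn_save' => 'Enregistrer')
  end
end

French.new.translate('btn_save')    #=> "Enregistrer"
French.new.translate('btn_cancel')  #=> "Cancel" (inherited from the base class)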
We just store the strings in the DB and have a translator mode built into the application to handle actually adding strings for different languages.
In the application we use various tricks to create text ids, like
£("btn_save")
£(Order.class,"amt")
The translations are loaded from the DB when the system boots, or when a reload is manually triggered. The £ method takes care of looking up the translated string according to the language specified in the user session.
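In outline, the lookup is nothing more than a nested hash populated at boot; a sketch (the table layout is simplified, and translate stands in for the £ method):
# Loaded from the database at boot or on manual reload.
TRANSLATIONS = {
  'en' => { 'btn_save' => 'Save' },
  'de' => { 'btn_save' => 'Speichern' }
}

def translate(key, lang)
  TRANSLATIONS.fetch(lang, {}).fetch(key, key)  # fall back to the text id
end

translate('btn_save', 'de')  #=> "Speichern"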
You can check out my l10n tool, iL10Nz, at http://www.myl10n.net
You can upload po/pot files, XLIFF and INI files, translate, and download.
You can also check out this video on YouTube:
http://www.youtube.com/watch?v=LJLmxMFxaxA
Thanks,
Olivier
