I am working on a real estate website that uses a RETS service to pull data to my local server.
I have one small problem: I can fetch data from RETS, which has about 3 lakh (300,000) records in its database, but I can't find a way to fetch all of those records in batches of 50k at a time.
I didn't find any 'LIMIT' keyword in RETS, so how can I fetch 50k records at a time without 'LIMIT'?
Please help me.
RETS is not really much of a standard. It more closely resembles a pseudo-standard. It loosely defines an XML schema that describes real estate listings.
In version 1.x, the "standard" was composed of DTD documents. In 2.x, the "standard" uses XSD documents to describe the listings.
http://www.rets.org/documentation
However, in practice, there is almost no consistency amongst implementers. Having connected to hundreds of "RETS Compliant" service providers, I'm convinced that not one of them is like any other one.
Furthermore, the 2.x "standard" has not changed in 3 years. It's an unmaintained, sloppy attempt at a standard. RETS is often used as a business buzzword by non-technical people. In reality, it's just an arbitrary attempt at modeling real estate listings in XML.
Try asking the specific implementer for their documentation. Often, they don't have any. So, emailing the lead developer has frequently been helpful. Sometimes they'll provide a WSDL which will outline the supported calls. Often, the WSDL doesn't coincide with the actual service, so beware.
As for your specific question, try caching the results. Usually, the use of a limit on a RETS call is a sign of a direct dependency. As requests for your service increase, the load that your service puts on theirs will break things (and won't be appreciated). Also, if their service goes down (even temporarily), yours will be interrupted as well. Most importantly, it will make the live requests to your pages really, really slow (especially if their system is slow at the time). The listings usually don't change frequently enough to worry about stale data, so caching for up to an hour is pretty acceptable.
Best of luck!
libRets provides support for generating a query with fetch limits:
http://www.crt.realtors.org/projects/rets/librets/documentation/api/classlibrets_1_1_search_request.html
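To make that concrete, here is roughly what offset-based paging could look like from Python. The method names (CreateSearchRequest, SetLimit, SetOffset, Search) are taken from the linked C++ API docs, and I'm assuming the SWIG-generated Python binding mirrors them; the login URL, resource, class and query are placeholders, so treat this as a sketch rather than copy-paste code.

    import librets

    BATCH_SIZE = 50000

    # Assumption: the Python (SWIG) binding mirrors the C++ SearchRequest API linked above.
    session = librets.RetsSession("http://rets.example.com/login")  # hypothetical login URL
    if not session.Login("username", "password"):
        raise RuntimeError("RETS login failed")

    offset = 1  # the RETS Offset argument is commonly 1-based; adjust if your server differs
    while True:
        request = session.CreateSearchRequest("Property", "RES", "(ListPrice=0+)")
        request.SetStandardNames(True)
        request.SetLimit(BATCH_SIZE)   # at most 50k rows per request
        request.SetOffset(offset)      # continue where the previous batch ended

        results = session.Search(request)
        fetched = 0
        while results.HasNext():
            fetched += 1
            # process e.g. results.GetString("ListingID") here

        if fetched < BATCH_SIZE:
            break                      # last, partial batch
        offset += BATCH_SIZE

    session.Logout()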
But the last I knew, the company Intereality either ignored RETS or outright didn't provide complete compatibility with it. The quickest way to know you're dealing with them is that they also thought it a good idea to make all of the "System" names for table fields numeric.
If you're lucky, you're using a Rapattoni-backed server; they do provide spec-compatible servers.
Last point: I can't for the life of me remember its name, but I used to use a free Java-based RETS tool to build valid queries (including offset/limit clauses), and that made it a tad easier to build automated fetchers for a client's batch-processing system.
In RETS, if the record count is more than the server's limit, you can download in batches, or in some cases strip the limit from the request while downloading.
The best way to solve the problem is to divide the data into small units of download while keeping the server's download limit in mind. As the field to divide on in MLS/IDX data, I suggest the modification date or the listing date; a sketch of that approach follows.
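As a rough sketch of that date-window approach: page through the listings one modification-date window at a time so each request stays under the server's cap. The fetch_window function and the ModificationTimestamp field name are placeholders for whatever your RETS client and your MLS metadata actually provide.

    from datetime import datetime, timedelta

    WINDOW = timedelta(days=7)  # shrink this if any single window exceeds the server's cap

    def fetch_window(start, end):
        """Placeholder for one RETS search covering [start, end).

        With most clients this boils down to a DMQL filter such as
        (ModificationTimestamp=2011-01-01T00:00:00-2011-01-08T00:00:00).
        """
        return []  # wire this up to your RETS client of choice

    def fetch_all(since, until):
        rows = []
        start = since
        while start < until:
            end = min(start + WINDOW, until)
            rows.extend(fetch_window(start, end))
            start = end  # windows never overlap, so nothing is fetched twice
        return rows

    listings = fetch_all(datetime(2011, 1, 1), datetime.utcnow())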
I have a product that gathers and displays measurements of all kinds (won't go into it). The display portion is, as one would expect, a database + website built on top of it (with Symfony).
However, we'll probably be creating an API to expose the data to third-parties as well.
Now, we have a choice: either build both the website and the API directly on top of the database, or build only the API on top of the database and have the website consume the API.
I would greatly prefer the latter, since otherwise I'll have to adapt both model layers for the API and the website every time the schema changes (which can be a few times).
If I have the latter I obviously have the advantage of only adapting the API model. If the API contract stays the same, the website wouldn't need adapting.
However, obviously there is a downside in performance.
Comparing website <-> database with website <-> API <-> database, the first will obviously be faster.
My question is: what is your opinion on this trade-off?
I'm hoping the performance can be almost evened out, since all the machines will be on the same LAN + there will be caching. If that's the case, the ease of development would certainly make my life easier :-)
Looking forward to your opinions and experience!
If there was ever a case of premature optimization, this is it! You're not going to know the answer without more information, and I suspect very much that the performance differences between the two will be so negligible as to be irrelevant in your domain.
The best approach, IMO, is to spike on a few of your models using both approaches and see where that gets you.
There's no better way to make sure your API is going to be usable by others than to use it yourself. I would go website -> API -> database. Write it once; you can always tune it and "cheat" later if you have to.
Many modern websites use JavaScript (AJAX etc) and then make service calls to an API. If you took that approach you would simply have a carefully designed, reusable API layer in front of your DB.
I find that there's little or no extra effort here, and I'm sceptical that you'll incur noticeable performance penalties.
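As a rough sketch of that shape (in Python rather than the Symfony stack from the question), the website renders pages by calling the same HTTP API that third parties would use, with a short-lived in-process cache to offset the extra hop; the host name and endpoint are made up for illustration.

    import time
    import requests
    from flask import Flask, render_template_string

    API_BASE = "http://api.internal.local"   # hypothetical internal API host on the same LAN
    CACHE_TTL = 60                            # seconds; the measurements change slowly
    _cache = {}                               # {url: (expires_at, json_payload)}

    app = Flask(__name__)

    def api_get(path):
        """Fetch from the internal API, with a tiny in-process cache to hide the extra hop."""
        url = API_BASE + path
        hit = _cache.get(url)
        if hit and hit[0] > time.time():
            return hit[1]
        payload = requests.get(url, timeout=2).json()
        _cache[url] = (time.time() + CACHE_TTL, payload)
        return payload

    @app.route("/measurements/<int:sensor_id>")
    def measurements_page(sensor_id):
        data = api_get(f"/v1/measurements/{sensor_id}")   # same endpoint third parties would use
        return render_template_string(
            "<h1>Sensor {{ id }}</h1><ul>{% for m in data %}<li>{{ m }}</li>{% endfor %}</ul>",
            id=sensor_id, data=data,
        )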
Systems sometimes have to accommodate the possibility of bad real-world data. Consider that some data originates with paper forms, and forms inherently have limited means of validating data.
Example 1: On one form users are expected to enter an integer distance (in miles) into a blank. We capture the information as written as a string since we don't always end up getting integer values.
Example 2: On another form we capture a code. That code should map to one of the codes in our system. However, sometimes the code written on the form is incorrect. We capture the code and allow it to exist with an invalid value until some future time of resolution. That is, we temporarily allow bad data since it's important to record the record even if some of it is invalid.
I'm interested in learning more about how systems accommodate bad data, that is, human error. Databases are supposed to be bastions of data integrity, but the real world is messy and people make mistakes. Systems must allow us to reflect those mistakes.
What are some ways systems you've developed accommodate human error? What practices have you used? What lessons have you learned?
Any further reading on the topic? (I had trouble Googling it.)
I agree with you: whatever we do, there's no guarantee that we can get rid of bad or incorrect data, especially (but not only) when it comes to user input. In my experience the same problems exist in complex integration projects, in which you have to integrate and merge (often inconsistent) data retrieved from different systems.
A good strategy is to decouple the input from the operational system itself. First, place user-provided (or external-system-provided) data in a separate datastore (e.g. a different schema). In a second step, load this data into your operational datastore, but only if it conforms to strict rules (e.g. use address verification software to verify a given address). This Extract, Transform, Load (ETL) approach is fairly common in Data Warehousing (DWH) solutions, but in my experience it can be applied programmatically in transactional systems as well.
This approach often leads to asynchronous processes in which the input is submitted first and (maybe) at a later time the external entity (user or system) receives feedback on whether its data was correct or not.
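A minimal sketch of that staging-then-promote idea, assuming a relational store reachable from Python; the table names, the schema and the ZIP-code rule are invented for illustration:

    import re
    import sqlite3

    conn = sqlite3.connect(":memory:")  # stand-in for the real staging/operational schemas
    conn.executescript("""
        CREATE TABLE staging_contacts (id INTEGER PRIMARY KEY, name TEXT, zip_code TEXT,
                                       status TEXT DEFAULT 'pending', error TEXT);
        CREATE TABLE contacts         (id INTEGER PRIMARY KEY, name TEXT, zip_code TEXT);
    """)

    def validate(row):
        """Strict rules applied only at promotion time; raw input is never rejected outright."""
        if not re.fullmatch(r"\d{5}", row["zip_code"] or ""):
            return "zip_code must be exactly 5 digits"
        return None

    def promote_pending():
        rows = conn.execute(
            "SELECT id, name, zip_code FROM staging_contacts WHERE status = 'pending'")
        for rid, name, zip_code in rows.fetchall():
            error = validate({"name": name, "zip_code": zip_code})
            if error:
                conn.execute("UPDATE staging_contacts SET status='rejected', error=? WHERE id=?",
                             (error, rid))
            else:
                conn.execute("INSERT INTO contacts (name, zip_code) VALUES (?, ?)",
                             (name, zip_code))
                conn.execute("UPDATE staging_contacts SET status='loaded' WHERE id=?", (rid,))
        conn.commit()

    # Raw input always lands in staging first; promotion happens asynchronously.
    conn.execute("INSERT INTO staging_contacts (name, zip_code) VALUES ('Ann', '1234x')")
    conn.execute("INSERT INTO staging_contacts (name, zip_code) VALUES ('Bob', '90210')")
    promote_pending()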
EDIT: For further reading I recommend having a look at DWH concepts. Although you may not want to build such a thing, you could partially apply those concepts:
http://en.wikipedia.org/wiki/Extract,_transform,_load
http://en.wikipedia.org/wiki/Data_warehouse
http://en.wikipedia.org/wiki/Data_cleansing
A government department I worked in does a lot of surveys, most of which are (were) still paper based.
All the results were OCR'd into the system.
As part of the OCR process a digital scan of the forms is kept.
Data is then validated, data that is undecipherable or which fails validation is flagged.
When a human operator reviews the digital data they can modify it if they are confident that they can correctly interpret what the code could not; they can also (here's the cool bit) bring up the scan of the paper-based original and use that to determine what the user was trying to say.
On a different note: at some point you want to validate the data coming in against any expected ranges you want it to conform to. By rejecting it at the point of entry you give the user a chance to correct it; the trade-off is that every time you reject it you increase the chance of them abandoning the whole process.
At some point in your system you need to specify the rules which will be used for validation. At the end of the day a system is only going to be as smart as those rules. You can develop these yourself in the code (probably the business logic) or you might use a third-party component.
Having flexible control over the validation is pretty important, as the rules are likely to change over time.
To be honest with you, one point of migrating from paper-based systems to IT is to remove these errors and make sure all data is always correct. I doubt any correctly planned and developed IT system (especially business financial systems) would allow such errors. Not in the company I am working for anyway...
There are lots of software tools that address the kinds of problems you mention. There are platforms and tools that let you define rules for scrubbing and transforming data and handling validation errors. Those techniques are widely used for Data Integration and Business Intelligence applications. Google for "Data Quality" or "Data Integration".
The easiest thing to do (though it is not always possible) is to design the interface where users enter the data so as to limit, as much as possible, the amount of free text they need to enter. In my experience this seems to be where a lot of problems come from. One simple example is to provide a select or auto-complete field.
One thing that you could do is do everything possible to determine if the data is correct before going into the db. I try to give the user entering the data as much feedback as possible so they can (ideally) fix some of the issues before the data gets persisted. For example, it is a very quick check to determine if the data being entered is of the correct type.
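A minimal illustration of that kind of early check, using the distance-in-miles field from the question's first example; the function and its messages are invented for the sketch:

    def check_distance_field(raw: str):
        """Return (value, error) for a distance-in-miles field captured as free text."""
        cleaned = raw.strip()
        if not cleaned:
            return None, "Distance is required."
        try:
            value = int(cleaned)
        except ValueError:
            # Give feedback immediately instead of persisting a string like "ten-ish".
            return None, f"'{raw}' is not a whole number of miles."
        if value < 0:
            return None, "Distance cannot be negative."
        return value, None

    for entered in ["12", " 7 ", "ten-ish", "-3"]:
        value, error = check_distance_field(entered)
        print(entered, "->", value if error is None else error)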
I got started in legal systems before the PC era. Litigation support databases routinely have to accommodate factually incorrect, incomplete, and contradictory information. It takes a different way of thinking.
The short version . . .
Instead of recording a single fact, you record multiple assertions about a fact. It boils down to designing a database to store data from assertions like these.
In an interview at 2011-01-03 08:13, Neil Rimes told Officer Cane
that he was at home from 2011-01-02 20:00 until 2011-01-03 08:13.
In an interview at 2011-01-03 08:25, Liza Nevers told Officer Cane
that Neil Rimes came home at 2011-01-02 23:45.
In a deposition at 2011-05-13 10:22, Cody Maxon told attorney Kurt
Schlagel that he saw Neil Rimes at Kroger at 2011-01-03 03:00
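A sketch of what such an assertion-centric model might look like in code; the field breakdown is my own invention for illustration, not an actual litigation-support schema:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class Assertion:
        """One party's claim about a fact, never 'the' fact itself."""
        source: str                      # who made the claim, e.g. "Liza Nevers"
        recorded_by: str                 # who captured it, e.g. "Officer Cane"
        recorded_at: datetime            # when the interview/deposition happened
        subject: str                     # who/what the claim is about
        claim: str                       # the content of the claim
        period_start: Optional[datetime] = None
        period_end: Optional[datetime] = None

    # The three statements from the example coexist even though they contradict each other;
    # resolving them is an analysis step, not a constraint enforced at insert time.
    assertions = [
        Assertion("Neil Rimes", "Officer Cane", datetime(2011, 1, 3, 8, 13),
                  "Neil Rimes", "was at home",
                  datetime(2011, 1, 2, 20, 0), datetime(2011, 1, 3, 8, 13)),
        Assertion("Liza Nevers", "Officer Cane", datetime(2011, 1, 3, 8, 25),
                  "Neil Rimes", "came home at 23:45",
                  datetime(2011, 1, 2, 23, 45), None),
        Assertion("Cody Maxon", "attorney Kurt Schlagel", datetime(2011, 5, 13, 10, 22),
                  "Neil Rimes", "was seen at Kroger",
                  datetime(2011, 1, 3, 3, 0), None),
    ]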
What are some of the common and notable performance issues/bottlenecks that are typically encountered in a web application, in both the front-end and the back-end layers?
An example of what I mean in a database is not having something you are querying on be an index. That would slow down the query. On the front-end it might be something funky going on with JavaScript that makes your application seem slow.
What are the general rules of thumb that help navigate such issues? And what are some good to-do's?
Thanks,
Alex
On the front-end:
- Push all of your assets (CSS files, images, static content) to a CDN. Edgecast is pretty good and reasonably priced.
- Don't load entire JavaScript frameworks when you only need a few features from them; only load what's needed.
On the back-end:
- Memcache the results of all database calls, using a hash of the SQL query as the key name and the result set as the value (see the sketch below).
- Make sure you are not making your database tables really 'wide': tons of columns, and column types like 'text' and 'blob'.
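A minimal sketch of that query-result caching, assuming the pymemcache client and a memcached instance on localhost; the table, the TTL and the choice of client library are placeholders:

    import hashlib
    import json
    import sqlite3

    from pymemcache.client.base import Client

    cache = Client(("127.0.0.1", 11211))     # assumes a local memcached is running
    db = sqlite3.connect(":memory:")          # stand-in for the real database
    db.execute("CREATE TABLE listings (id INTEGER, price INTEGER, city TEXT)")
    CACHE_TTL = 300                            # seconds

    def cached_query(sql, params=()):
        """Key the cache on a hash of the SQL text plus its parameters."""
        key = hashlib.sha1((sql + repr(params)).encode("utf-8")).hexdigest()
        hit = cache.get(key)
        if hit is not None:
            return json.loads(hit)
        rows = db.execute(sql, params).fetchall()
        cache.set(key, json.dumps(rows), expire=CACHE_TTL)
        return rows

    listings = cached_query("SELECT id, price FROM listings WHERE city = ?", ("Austin",))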
For the front-end, there are well-known guidelines/rules you can follow, and there are some great tools like YSlow that can help you pinpoint the bottlenecks.
For the back-end, as you've noted, efficient use of indexes is a must. Other optimizations usually involve caching, and basic stuff like avoiding doing stuff within loops that can be done once. I'm sure people here will have suggestions, but remember "premature optimization is the root of all evil!" :-)
Millhouse is on to it. I can also add:
Batch expensive operations up. For example: don't make lots of individual calls to a database if you can do it all in one hit.
Avoid server hops where you can.
Process in parallel if you can (not so common for your 'average' web app but quite possible in larger Enterprise scale apps).
Pre-process: crunching data, pre-publishing content, etc. The more you can do before it's needed, the better.
Use a CQRS-based architecture. CQRS stands for Command/Query Responsibility Segregation; it basically means that you have different code (services) for reading from the DB and writing to the DB. A good practice for scalability is to have separate DBs for reading and writing (it actually does make sense, if you read more about CQRS), and you can scale out the read database by having copies run on multiple servers.
CQRS is not only interesting from a scalability point of view, but also from a code maintenance and clarity point of view. It does take some effort to learn about CQRS and understand it, though.
Check out these links:
http://www.slideshare.net/skillsmatter/ddd-exchange-2010-udi-dahan-on-architectural-innovation-cqrs
http://www.slideshare.net/pjvdsande/rethink-your-architecture-with-cqrs
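To make the command/query split concrete, here is a very small sketch; the handler names are invented and a single SQLite connection stands in for what would normally be separate write and read stores:

    import sqlite3
    from dataclasses import dataclass

    # Commands mutate state and go to the write store.
    @dataclass
    class RegisterMeasurement:
        sensor_id: int
        value: float

    class CommandHandlers:
        def __init__(self, write_db):
            self.write_db = write_db

        def handle_register_measurement(self, cmd: RegisterMeasurement):
            self.write_db.execute(
                "INSERT INTO measurements (sensor_id, value) VALUES (?, ?)",
                (cmd.sensor_id, cmd.value))
            self.write_db.commit()

    # Queries never mutate state and can be served by read replicas.
    class MeasurementQueries:
        def __init__(self, read_db):
            self.read_db = read_db

        def latest_for_sensor(self, sensor_id: int, limit: int = 10):
            return self.read_db.execute(
                "SELECT value FROM measurements WHERE sensor_id = ? "
                "ORDER BY rowid DESC LIMIT ?", (sensor_id, limit)).fetchall()

    db = sqlite3.connect(":memory:")    # one connection stands in for both stores here
    db.execute("CREATE TABLE measurements (sensor_id INTEGER, value REAL)")
    CommandHandlers(db).handle_register_measurement(RegisterMeasurement(7, 21.5))
    print(MeasurementQueries(db).latest_for_sensor(7))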
Convert dynamic content to static content, and regenerate the static content when the objects it depends on change. I saw one article that said more than 80 percent of the content on the Amazon website is static.
I am adding some indexes to my DevExpress TdxMemDataset to improve performance. The TdxMemIndex has SortOptions which include the option for soCaseInsensitive. My data is usually a GUID string, so it is not case sensitive. I am wondering if I am better off just forcing all the data to the same case or if the soCaseInsensitive flag and using the loCaseInsensitive flag with the call to Locate has only a minor performance penalty (roughly equal to converting the case of my string every time I need to use the index).
At this point I am leaving soCaseInsensitive off and just converting the case myself.
IMHO, the best option is to ensure data quality at post time. Reasoning:
You (usually) know the nature of the data, so you can, e.g., use UpperCase (knowing that GUIDs are all in the ASCII range) instead of the much slower AnsiUpperCase, which a general component like TdxMemDataSet is forced to use.
You enter the data only once. Searching/sorting/filtering, which all go through the internal uppercasing engine of TdxMemDataSet, are repeated actions. Also, there are other chained actions which will trigger this engine without you realizing it (e.g. a TcxGrid which is sorted by default, has GridMode := True (I assume that you use the DevExpress components), and has a class acting as a broker passing the sort message to the underlying dataset).
Usually data entry is done in steps, one or a few records per batch. The only notable exception is data acquisition applications. But in both cases the user's expectations allow much greater response times for you to play with (IOW, how much would an UpperCase call add to a record post which lasts 0.005 ms?). OTOH, users are very demanding about the speed of data retrieval operations (searching, sorting, filtering, etc.). Keep the data retrieval as fast as you can.
Having the data in the database already in the form you want to expose reduces the risk of processing errors when (if) you write other modules: otherwise you need to remember to AnsiUpperCase the data in every module, in every language you write. A classic example here is when you use other external tools to access the data (for example, DB managers executing an SQL SELECT over the data).
hth.
Maybe the DevExpress forums (or even a support email, if you have access to it) would be a better place to seek an authoritative answer on that performance question.
Anyway, it's better to guarantee that the data is in the format you want, for the reasons plainth already explained, the moment you save it. So in this specific case, make sure the GUID is written in upper (or lower, it's a matter of taste) case. If it is SQL Server or another database server that has a GUID datatype, make sure the SELECT does the work; if applicable and possible, even the sort.
I work for a CMMI level 5 certified company, and one thing I hate about it is the amount of documents we prepare (as a programmer I already hate documents). We have lots and lots of documents like the PID (project initiation document), business requirements, system requirements, tech spec, code review checklist, issue logs, defect logs, configuration management plan, configuration management checklist(s), release documents, and lots more...
Almost 90% of these docs are done just for the sake of the QA audit :) ... What do you think are the most important documents for a project? What documents can be used in the long run by another developer?
Please share your good practices here. I would like to use them for my own projects or the company I am planning to start in the long run.
Thanks
The key document is a good functional spec. There should be one and only one reference document for a system.
Overdoing documentation proliferates a large number of small requirements and spec documents every time someone changes a system or interface. For a system of any complexity, before long you have your spec distributed around several hundred assorted word, excel, visio and even powerpoint files. When this happens you lose clarity about what is current or even whether you have located and identified all pertinent documentation.
The BRD-SRD-Tech spec progression is based on an assumption that the business signs off the BRD, a business analyst signs off the SRD against requirements documented in the BRD and the technical specification is signed off against the SRD. This generates a web of sign-offs, multiple documents with redundant information and makes it difficult and clumsy to keep the spec documents up to date.
Because of this, subsequent requirements documentatation tends to take the form of a series of change request and supplemental requirement and spec docs, each with their own sign-off and audit process. You gain CYA and audit trail (or at least the appearance of an audit trail), but you lose clarity. There is now no definitive reference document for the system and it is difficult to establish what is current or relevant to any particular activity. The net result is that your business analysis process gets bogged down in forensic research, which adds overheads and latency to delivery schedules.
A spec document should be built in such a way that there is one definitive reference for any given system or subsystem. The document should be kept up to date and versioned. Get a good technical documentation tool like Framemaker, so your process can scale and the document has some structural integrity of the sort lacking in Word.
For me the only real document I ever use is a spec. The more detail the better. However, it doesn't need to be all completed at one time, and it doesn't need to be particularly formal. What is far more useful to me than documents that are checked and signed and double-checked and double-signed is always being able to get the latest version of a document, and being able to talk to people about what they have written and get a decision in the case of any ambiguity. This is far more useful to me than anything else.
To sum up: a spec is the only document I have ever found useful; however, it pales in comparison to having a project manager who knows the proposed system inside out and can make sensible decisions based on what they know.
Documentation is like tofu -- most people hate it until they realize that under the right conditions, it can be really good.
The problem is that what you consider documentation is mostly made for documentation's sake. You, as a developer, don't see any immediate value in the documents you produce because you know you can do your job without all the TPS reports which you're required to make.
Unfortunately, I'm going to wager that there's not a lot you can do about it in a company where you're being forced to eat raw tofu all the time. You'll probably just have to suck it up and write the docs your company requires, but you can at least do one thing: you can write documents which are at least useful to you, and pass them along with your code for others who will maintain it.
Aside from inline documentation, you could set up a wiki to be used by yourself and people on your team. This type of documentation is searchable, which is already a big plus to developers, plus it's more of a living document instead of a homework-like paper you had to write. You already post to SO, so just think of your documentation as pooling your knowledge in a more useful place.
What do you think are the most important documents for a project?
Different people have different needs: for example the documents which the owner needs (e.g. the business contract) aren't the same as the documents which QA needs.
What documents can be used in the long run by another developer?
IMO the most important document (except for the source code) is the functional specification: because what the software is supposed to do (as opposed to, what it is doing) is the one thing that can't necessarily be reverse-engineered. See also How does a good developer keep from creating code with a low bus hit factor?
User Stories, burndown chart, code
I'm a fan of the old 4+1 views:
Use Case view (a/k/a user stories). There are several forms: proper use cases, forward-looking use cases that aren't as well defined and epics which need to be decomposed.
Logical view. The "static" view. UML Class diagrams and the like work well here as a design document. This also includes request and response formats for various protocols. Here is where we document the RESTful requests and responses. This includes the REST URI design.
Process view. The "dynamic" view. UML activity diagrams, sequence diagrams and statecharts and the like for here for design documents. In some cases, simple narratives work well. In other cases, there's a State design pattern, and it requires a combination of class diagrams and statecharts to show how the stateful objects interact.
This also includes protocols (e.g. REST). Here is where we define any special processing for the various REST requests.
This also includes an authentication or authorization rules, and any other cross-cutting aspects like security, logging, etc.
Component view. The pieces we're building for deployment. This includes the stuff we depend on, the structure of the modules and packages, etc. This is often a simple component diagram or a list of components and their dependencies.
Deployment view. We try to generate this from the code as deployed. Since we're using Python, we use epydoc to create the API documentation. We also use Sphinx to import module documentation into this view of the software.
This also includes the parameters, settings, and configuration details.
This, however, isn't sufficient.
When projects start, you have to work up to this through a series of sprints.
The first sprints build just the use case view.
Subsequent sprints build an "architecture" to implement the use cases. The architecture document has 4+1 views, but at a high level of abstraction. It summarizes the structure of the model schemas, the requests and replies, the RESTful processing, other processing, the expected componentry, etc. It never has a Deployment view. We generally reference operator guide and API documents as the deployment view of an architecture.
Then design-and-construction sprints build (and update) detailed 4+1 view documents for various components.
Then release sprints build (and update) the deployment views.
From the project point of view, the most important documents are those that normally include the word Plan, such as the Project Plan, Configuration Management Plan, Quality Plan, etc.
What you are describing is common in process improvement efforts, and normally comes down to two major causes. One is that the system really is overreaching and getting in the way of real work being done. The other is actually answered in your question: it is not the case that the documents are done only for the sake of audits, and your focus should not just be how useful the doc is for other developers, but how useful it is for the project or the company as a whole.
One usually looks at things from one's own perspective; sometimes it's necessary to look at the general picture.