Oracle Applications - programmatically manipulate form data

Oracle Applications 11.5.7 with Forms Server 6 - is there any way to programmatically send data from the client to the server, as if it were coming from the client forms?
This is an old version, but I am stuck with it for now.
I assume there is an Oracle-internal protocol that governs how the client JVM sends user input across the network to the Forms server. Can I simulate this, at least for simple data entry? My goal is to get mass data input done programmatically rather than having to use the Oracle forms or hire an Oracle specialist to build a custom loader for me, which is likely to be very expensive.
I have already looked at "generic" input loaders for BOMs, routings, costs etc., such as apps4more.com, but they don't support 11.5.7. The only one I found so far that would support 11.5.7 was exorbitantly expensive.

Have you tried Forms Playback?
You can record Forms actions to a file and play them back later. It can be useful if you just need to redo the same activity on different instances. The file is plain text, but unfortunately the format is quite complex and it is not easy to generate it by any means other than recording...

Related

Can effective database replication be done through an asynchronous messaging system?

Given a pre-production Oracle database and a production Oracle database, if around 300K records need to be transferred from the former to the latter, would using a messaging system such as an ESB/JMS/TIBCO be a good option?
I don't know Oracle, but if I were trying to asynchronously replicate data with SQL Server, I would use its own internal tools to accomplish it. I imagine Oracle has similar tools to run jobs that copy data between two Oracle databases.
However, I do have quite a bit of experience using an ESB (Mule) with ActiveMQ to replicate data across database technologies. Specifically I've done SQL Server->Mongo and MySQL->Mongo with Mule and ActiveMQ.
So far I've found Mule to be a wonderful solution - especially coupled with ActiveMQ. I've been able to replicate about 400k WordPress blog posts (from MySQL) to Mongo in about 20 minutes, and transferring 100k articles from a CMS took about 30 minutes.
I figured I'd weigh in because you mentioned an ESB and messaging. I would go that route if the integration points are heterogeneous. If you do go down that route, Mule is awesome.
If you are trying to move data from an old database to a new one instead of replicating it asynchronously, a simpler method might be a plain SQL export/import (a database dump). Assuming your old database allows you to export its contents, the export gives you a SQL file; open that file in a text editor, run it against the new database, and it will re-create all your tables and populate them with the old data.
Actually, using the database's own tools is the recommended method for replicating data between databases.
With messaging, you get no guarantee that the data will be transferred in the same sequence it was sent, or that relationships between tables will be honored, which can result in replication errors unless you build some mechanism on the JMS receiver side to maintain the sequence. But that looks rather like overhead.
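To make that sequencing idea concrete, here is a minimal sketch (not from the question, purely illustrative) of how a JMS sender could stamp each message with a sequence number so the receiver can restore ordering or detect gaps. It assumes a JMS 1.1 provider such as ActiveMQ; the broker URL, queue name and property key are made up for the example.

    // Illustrative only: tag each message with a sequence number so the
    // receiving side can restore ordering or detect missing records.
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class SequencedSender {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("REPLICATION.RECORDS");   // hypothetical queue name
                MessageProducer producer = session.createProducer(queue);

                String[] records = {"record-1", "record-2", "record-3"};    // stand-ins for exported rows
                for (int i = 0; i < records.length; i++) {
                    TextMessage message = session.createTextMessage(records[i]);
                    message.setLongProperty("SEQ_NO", i);                   // receiver re-orders by this property
                    producer.send(message);
                }
            } finally {
                connection.close();
            }
        }
    }

The receiver would have to buffer messages and apply them in SEQ_NO order, which is exactly the extra bookkeeping that makes the database's own replication tools the simpler choice here.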

Database for Metro style apps in Windows 8 development?

Hi, I am developing a Metro style application where I connect to web services, get data from them, and bind it to the UI.
My requirement is that, within my application, I need to create tables, define relations between them, dump the data into that local database, and use that data when calling the other methods in my service application (calling a further service method and passing something from the local data as input).
Can you please explain the following:
What database is supported for Metro style apps?
How can I create a database, create tables, and dump into them the data I got as the response from my service application?
I am new to Metro style application development, please help me out.
Thanks in advance.
First of all, WinRT has very poor database support. Most of this kind of thing is done via web services, OData, etc.
BUT I'm almost 100% sure that you can use SQLite. On CodePlex there are connectors from Win8 Metro apps to a SQLite database, so check that topic. I also saw somewhere on an MS page that SQLite is supported in some way. Check it.
What Fixus said is correct. Personally, since my app doesn't have a large amount of data to store locally (it goes against the Metro guidelines to store large amounts of data), I serialize the objects to local storage instead. When needed, and if internet is available, the services are called and the local data is updated.
If you choose to use SQLite, make sure you use the real deal and not a third-party DB, as the DB library must be approved by Microsoft if you want the app accepted into the Windows Store. I'm not even sure SQLite is approved yet, but by the looks of it it will be.
Tim Heuer always writes great articles on the subject; this one might help you.
Let me know if you need help with serializing in WinRT.
Best of luck!
We recommend using a SQLite database with LinqConnect - Devart's LINQ to SQL compatible solution, which supports the SQLite engine (provided by http://code.google.com/p/csharp-sqlite/). You can use the LINQ and ADO.NET interfaces with our product. Starting from version 4.0, LinqConnect supports Windows Metro applications: http://blogs.devart.com/dotconnect/linqconnect-for-metro-quick-start-guide.html.
If you're building an application that has to keep working without any network connection and needs to synchronize at some point, you need to keep a local database.
You can read the following article, which has some basic guidelines and samples.
http://blogs.msdn.com/b/win8devsupport/archive/2013/01/10/using-database-in-windows-store-apps-i.aspx

Best way to get database information to a program (Windows and Mac)

I'm using Delphi at the moment and have a client program that connects to another program (a server), which has the MySQL database on it and sends the data back to the client. I have a web server that hosts the server program and the database. My question is: can I go straight from the client program I have made (Windows now, Mac in the future) to the MySQL database on the web server, or do I really need the server program? If a direct connection is possible, what do I need to do to connect my client program to the MySQL database over the internet?
You should be able to access the MySQL database directly as long as you've created a user/password combination for the database that allows remote access (security discussion aside). You'll then want to look for a compatible MySQL library to ease the communication between your program and MySQL. At the far technical end you might have to read/write directly to the MySQL socket, but that's possible as well.
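As a rough illustration of the "remote user plus client library" approach, here is a minimal sketch in Java/JDBC (the question's client is Delphi, which would use its own MySQL components instead). The host name, credentials, schema and table are placeholders, and it assumes the MySQL Connector/J driver is available.

    // Illustrative only: a client connecting straight to a remote MySQL server
    // through a client library, with no middle-tier server program involved.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DirectMySqlClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical host, schema, user and password.
            String url = "jdbc:mysql://your-web-server.example.com:3306/appdb";
            try (Connection conn = DriverManager.getConnection(url, "remote_user", "secret");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT id, name FROM customers")) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                }
            }
        }
    }

The same idea applies to a Delphi client: the library changes, but the client still needs a MySQL account that is allowed to connect from a remote host.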
Depends on whether your client programs will continue to be native applications or whether you plan to migrate to browser based clients.
If they're native applications you can obtain library components for the languages they're written in which will be able to communicate directly with the MySQL database. There are plenty of options for Delphi; I'm not familiar with what options might be available for native Mac development (but, of course, Embarcadero is in the process of rolling out a Delphi that can generate Mac applications).
If, however, you're planning on making your clients browser-based, ajax solutions want to talk to a web server rather than a database server. In that case, you will need to maintain your middleware. For a discussion of whether it's possible or desirable to have a browser based application communicate directly with a database server see this question.
I would use SOAP/XML for this, and leave the SQL out of the client entirely.
This is a typical use case where REST (for example, using JSON-encoded database records) can be helpful. It is easy to implement a Delphi client using lkJSON or SuperObject to put the database records from the HTTP response into a TClientDataSet.
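As a sketch of the REST idea (in Java rather than Delphi, purely for illustration), the client simply issues an HTTP GET and receives JSON-encoded records over the wire; the endpoint URL and field names below are hypothetical.

    // Illustrative only: fetch JSON-encoded database records over HTTP.
    // A Delphi client would do the same and load the rows into a TClientDataSet
    // using lkJSON or SuperObject.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RestRecordsClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://example.com/api/customers"))   // hypothetical endpoint
                    .header("Accept", "application/json")
                    .GET()
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            // The body would look something like: [{"id":1,"name":"Alice"}, ...]
            System.out.println(response.statusCode() + " " + response.body());
        }
    }

The point of this design is that the client never speaks SQL; the web server mediates every database access, which is also what makes browser-based clients possible later.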
Yes, it's possible, but is it a good idea?
Here's a basic discussion of 2-tier vs. 3-tier architecture.

Write to shared txt file or DB table from web service?

I am developing a web service that will be invoked (using JSON) from the client side each time the selection of a drop-down changes.
The goal is to register each "intermediate" change (on the client side) using the OnSelectedIndexChanged event, before the form is submitted to the server.
Each newly selected value will be written to a shared txt file by calling the corresponding web method via Ajax/JSON.
Would it be better to write these changes to a txt file (having to implement a lock/unlock policy to ensure exclusive access) or to define a DB table and save the changes there?
Each day the web app will have around 10 to 20 active users who might change the drop-down lists, and usually the right value is selected on the first try, so generally no more than one "intermediate" entry would be registered.
Thanks.
Don't use the filesystem. It's slow. Use MongoDB via a Node.js web server.
http://howtonode.org/express-mongodb
Good luck!
This sounds exactly like what you would want to use a database for, since ACID is already implemented there.
If you want a real headache (and a programming challenge!) debugging overlapping writes, resource starvation and deadlocks, then by all means go with a shared text file!
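To show how little code the database option needs, here is a rough sketch of logging one "intermediate" selection change as a row in a table, leaving concurrency to the database. It is written in Java/JDBC purely for illustration (the question appears to be about an ASP.NET service), and the connection string, table and column names are invented.

    // Illustrative only: each selection change becomes one atomic insert;
    // the database handles concurrent writers, so no file locking is needed.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Timestamp;

    public class SelectionChangeLogger {
        public void logChange(String userId, String listId, String selectedValue) throws Exception {
            String url = "jdbc:mysql://localhost:3306/appdb";   // hypothetical connection details
            String sql = "INSERT INTO selection_changes (user_id, list_id, selected_value, changed_at) "
                       + "VALUES (?, ?, ?, ?)";
            try (Connection conn = DriverManager.getConnection(url, "app_user", "secret");
                 PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, userId);
                ps.setString(2, listId);
                ps.setString(3, selectedValue);
                ps.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
                ps.executeUpdate();   // one independent, atomic write per change
            }
        }
    }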

Core Data's limits: can Core Data be used as a server-side technology?

I've found no clear answer so far, but maybe I've been searching the wrong way.
My question is: can Core Data be used as persistent storage for a server project? Where are Core Data's limits - how much data can be handled with Core Data and SQLite? SQLite should handle a lot of data very well, according to its website. I know of a proprietary Java persistence manager with an Oracle DB as storage that handles millions of entries and 3000 clients without problems. For my own project I wonder if I can use Core Data on the server side for user management and internal microblogging/texting with up to 5000 clients. Will it handle such big amounts of data, or do I have to manage something like that myself? Does anyone happen to have experience with huge amounts of data and Core Data?
Thank you
twickl
I wouldn't advise using Core Data for a server-side project. Core Data was designed to handle the data of individual, object-oriented applications, so it lacks many of the common features of dedicated server software, such as easily handling multiple simultaneous accesses.
Really, the only circumstance where I would advise using it is when the server-side logic is very complex and the number of users is small. For example, if you wanted to write an in-house web app and have almost all the logic on the server, then Core Data might serve well.
Apple used to have WebObjects, a package for building servers around an object-oriented persistence layer much like Core Data. (Core Data was inspired by a component of WebObjects called Enterprise Objects.) However, IIRC, Apple no longer supports WebObjects for external use.
You're better off using one of the many dedicated server packages out there than trying to roll your own.
I have no experience using Core Data in the manner you describe, but my understanding of the architecture leads me to believe that it could be used, depending on how you plan to query and manipulate the data.
Core Data is very good at maintaining an object graph and using faults to bring parts into memory as needed. In that manner, it could be good on a server for reducing memory requirements even with a large data set.
Core Data is not very good at manipulating collections of objects without loading them into memory, making a change, and writing them back out to disk. Brent Simmons wrote a blog post about this, in which he decided to stop using Core Data for some of his RSS reader's model objects because an operation like "mark all as read" didn't scale. While you would like to be able to say something like UPDATE articles SET status = 'read', Core Data must load each article, set its status property, then write it back to disk.
This isn't because Apple engineers are stupid, but because the query layer can't make assumptions about the storage layer (you could be using XML instead of SQLite), and it also must take into account cascading changes and the fact that some article objects may already be loaded into memory and will need to be updated there.
Note that you can also write your own storage providers for Core Data; see Aaron Hillegass's BNRPersistence project. So if Core Data is "mostly good" for your purposes, you might be able to improve on it for your application.
So, a possible answer to your question is that Core Data may be appropriate for your application, as long as you do not need to rely on batch updates to large numbers of objects. In general, no algorithm or data structure is appropriate for every scenario. Engineering is about wisely choosing between trade-offs. You won't find anything that works well for many clients in every case. It always matters what you are doing.
