Suggestions for an Oracle data modeler that can reverse engineer and handle very large databases [closed]

By very large, I mean in the realm of thousands of tables. I've been able to use Toad Data Modeler to do the reverse engineering, but once it loads up the txp file it creates, it just croaks. Even attempting to split up the model isn't possible, as TDM just sits there, frozen.
So I was wondering what other options are out there, and in particular whether any 32-bit applications can handle a database model of this size (the model uses ~750 MB of memory, which I would think should still fit on a 32-bit machine with maximum RAM).
Also note that I am not trying to create a diagram (a diagram that huge would be effectively useless unless you already knew the system); I just need to export the design of the database. So the modeling tool doesn't need to support any sort of fancy graphics, which probably wouldn't be feasible at this size anyway.
Edit:
I've found a workaround that gets TDM working: close the project, close TDM, reopen TDM, and then reopen the project. (Simply killing the process while it is frozen does not work.) This resets the diagram view to the normal zoom level, whereas immediately after reverse engineering the entire database is fitted into the view; if you just kill the process, the whole database is shown again the next time you open the file. I'm not certain of the details, but at normal zoom TDM runs much more smoothly and no longer freezes or crashes, so I am able to keep working in it and do what I need.

How about Oracle's own SQL Developer Data Modeler?
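If the end goal is just to export the design (the DDL) rather than to maintain a model, one alternative not mentioned above is to pull the definitions straight from the Oracle data dictionary with DBMS_METADATA, skipping the modeling tool entirely. Below is a minimal sketch using the python-oracledb driver; the credentials, DSN, and schema_ddl.sql output file name are placeholders, and a real export would probably also cover indexes, constraints, sequences, and so on.

    import oracledb  # pip install oracledb

    # Placeholder credentials/DSN -- replace with your own.
    conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb1")

    with conn.cursor() as cur:
        # Make the output easier to read and diff: drop storage clauses, add terminators.
        cur.execute("""
            begin
              dbms_metadata.set_transform_param(dbms_metadata.session_transform,
                                                'STORAGE', false);
              dbms_metadata.set_transform_param(dbms_metadata.session_transform,
                                                'SQLTERMINATOR', true);
            end;""")

        cur.execute("select table_name from user_tables order by table_name")
        tables = [row[0] for row in cur.fetchall()]

        with open("schema_ddl.sql", "w") as out:
            for name in tables:
                cur.execute("select dbms_metadata.get_ddl('TABLE', :t) from dual",
                            t=name)
                ddl = cur.fetchone()[0].read()  # GET_DDL returns a CLOB
                out.write(ddl + "\n\n")

With thousands of tables this takes a while to run, but it streams one table at a time and never has to hold the whole model in memory the way a 32-bit modeling tool does.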

Related

Calling antiviruses from software to scan in-memory images [closed]

Our system periodically (several times a minute) calls an external service to download images. As security is a priority, we are performing various validations and checks on our proxy service, which interfaces with the external service.
One control we are looking into is anti-malware which is supposed to scan the incoming image and discard it if it contains malware. The problem is that our software does not persist the images (where they can be scanned the usual way) and instead holds them in an in-memory (RAM) cache for a period of time (due to the large volume of images).
Do modern antiviruses offer APIs that can be called by the software to scan a particular in-memory object? Does Windows offer a unified way to call this API across different antivirus vendors?
On a side note, does anybody have a notion of how this might affect performance?
You should contact the antivirus vendors directly. Some of them do offer such APIs, but you will probably find it tricky even to get pricing information.
Windows has AMSI (the Antimalware Scan Interface), which offers both a stream interface and a buffer interface. I don't know whether it copies the data out of the buffer or scans the buffer in place.
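For the buffer interface specifically, even a proxy written in a scripting language can reach AMSI through the flat C API in amsi.dll (AmsiInitialize / AmsiOpenSession / AmsiScanBuffer). Here is a rough sketch via ctypes, with minimal error handling; the "ImageProxy" application name and the content name are arbitrary placeholders, and in production you would initialize the context once rather than per call.

    import ctypes

    amsi = ctypes.WinDLL("amsi")      # ships with Windows 10 / Server 2016 and later
    AMSI_RESULT_DETECTED = 32768      # results >= this value mean "malware"

    def buffer_is_malware(data: bytes, content_name: str = "downloaded-image") -> bool:
        """Ask the registered AV engine (via AMSI) to scan an in-memory buffer."""
        context = ctypes.c_void_p()
        session = ctypes.c_void_p()
        result = ctypes.c_int(0)

        if amsi.AmsiInitialize(ctypes.c_wchar_p("ImageProxy"), ctypes.byref(context)) != 0:
            raise OSError("AmsiInitialize failed")
        try:
            amsi.AmsiOpenSession(context, ctypes.byref(session))
            hr = amsi.AmsiScanBuffer(context, data, len(data),
                                     ctypes.c_wchar_p(content_name),
                                     session, ctypes.byref(result))
            if hr != 0:
                raise OSError("AmsiScanBuffer failed")
            return result.value >= AMSI_RESULT_DETECTED
        finally:
            amsi.AmsiCloseSession(context, session)
            amsi.AmsiUninitialize(context)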
And it will absolutely wreck your performance, probably.
What might be faster is to have some code that verifies the files really are images that can be decoded and re-encoded. There are obvious problems with re-encoding .jpg images (generation loss), so for those you might just sanity-check the header and data instead. This could also turn out to be slower, since decoding large images is slow, but it would probably do a better job of catching zero-day exploits targeting libpng/libjpeg.
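As a rough illustration of that idea (not tied to any particular library the poster had in mind), here is a minimal Python sketch using Pillow: a cheap structural verify() for the fast path, and a full decode/re-encode for the paranoid path, accepting the generation loss on JPEGs.

    from io import BytesIO
    from PIL import Image  # pip install Pillow

    def looks_like_an_image(data: bytes) -> bool:
        """Cheap check: parses headers and structure without decoding every pixel."""
        try:
            with Image.open(BytesIO(data)) as img:
                img.verify()
            return True
        except Exception:
            return False

    def reencode(data: bytes) -> bytes:
        """Expensive check: full decode and re-encode, discarding anything that
        isn't pixel data (lossy for JPEG input)."""
        with Image.open(BytesIO(data)) as img:
            out = BytesIO()
            img.convert("RGB").save(out, format="JPEG", quality=90)
            return out.getvalue()

Neither check replaces a real AV scan; it just narrows the attack surface to the one decoder you actually ship.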
Also, there are horror stories about scanning servers like that themselves being targeted by malware hidden in otherwise benign files, though the last case I remember is from the previous decade.

Website Performance Issue [closed]

If a website is experiencing performance issues all of a sudden, what can be the reasons behind it?
In my view the database could be one reason, or disk space on the server could be another, but I would like to know more about it.
There can be any number of reasons, and which ones apply depends on your setup.
Based on what you have described, you can have a look at the following (a small Python sketch for the first item follows this list):
System counters on the web server/app server: CPU, memory, paging, I/O, disk.
Any recent changes you made to the application: were they costly in performance terms? Do a round of analysis on those changes to check whether any of them need improvement.
If the system counters are choking, work out which resource is the bottleneck and try to resolve it.
Check all layers/tiers of the application: app server, database, directory services, etc.
If the database is the bottleneck, identify the costly queries and apply indexes and other DB tuning.
If the app server is choking, identify and improve the methods that are resource-heavy.
Performance tuning is not a fast-track process; it takes time. Identify a bottleneck, try to resolve it, and repeat until you reach the desired performance.
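As a concrete (if simplistic) example of the system-counter check mentioned above, here is a small Python sketch using psutil; the choice of counters and the plain print are placeholders for whatever monitoring you already have in place.

    import psutil  # pip install psutil

    def system_snapshot():
        """Grab the basic counters to compare against a known-good baseline:
        CPU, memory, paging/swap, and disk I/O."""
        return {
            "cpu_percent": psutil.cpu_percent(interval=1),
            "memory_percent": psutil.virtual_memory().percent,
            "swap_percent": psutil.swap_memory().percent,
            "disk_io": psutil.disk_io_counters()._asdict(),
        }

    if __name__ == "__main__":
        print(system_snapshot())

If any of these look saturated compared with a snapshot taken while the site was healthy, that points you at which tier to dig into next.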

Access query calculation vs. Excel calculation where Access query is data source [closed]

Setup is an Access back-end on a network drive with a query as data source for an Excel table. I want to know if it is better to perform complex calculations in Excel after the data has been imported vs having the calculation in the query itself.
For example:
The db collects quality control information with individual records for every component of a lot. One calculation checks that all components of each lot have been recorded and if so checks that the most recent component has been entered before the scheduled completion time.
Obviously this is a fairly intensive calculation in Excel, which leads to significant calculation time after the data has been imported. (It's very possible that the calculation isn't as efficient as it could be!)
So what I'd like to know is whether the Access query would be more or less efficient at doing this calculation (bearing in mind that the file is on a network drive).
I hope all that makes sense but if not let me know and I will try to clarify!
Thanks.
There is not a general rule for which platform will be faster. It depends on the size of the task and the method chosen to implement it.
MS Access is absolutely great at collating information, but that is because it is flexible and systematic, which helps prevent errors, not because collating information is fast. There is no general rule that says collating information will be faster in MS Access, Excel, SQL Server, or C#.
If you are using a code loop to compare all cells, that can take a long time however you do it. Post the code here and someone may be able to suggest how to convert it to calculated cell expressions; calculated cell expressions are what you need to make Excel fast.
If you aren't using a code loop, are you sure you aren't actually waiting for the database access?

Reuse existing database, or use a new one? (for application performance) [closed]

I have an existing Oracle 11g DB, with a high transaction volume application running on it. I have another application (a CMS), and am not sure if, performance wise, it makes sense to reuse the existing Oracle DB, or go with a separate database on another physical machine. The two apps share no common data.
My question is: does Oracle 11g (Enterprise) have features which would allow two entirely separate data sets to be accessed simultaneously, with the only performance limitation being the physical/virtual server resources available?
This question doesn't apply because my data sets are completely unrelated (and they're on MySQL). I checked out Oracle's suggestions for application performance, but this paper doesn't address optimizing performance for separate applications with separate data sets running on the same database.
The direct answer to your question is: no, Oracle doesn't have features to do that kind of separation if you don't consider any kind of change to your infrastructure.
As far as I can see, your options, with Oracle, would be:
1) Single instance.
1.1) Just one node (your case now, right?). Oracle Enterprise scales by adding nodes, so this option won't scale, and the two schemas/data sets in the same database will get in each other's way.
1.2) Add more nodes. You can add nodes to share the load (using RAC). Administration would be more complex and licensing costs would go up, but in this case scalability is limited only by your budget.
2) Two separate instances on separate machines. This is equivalent to using a new database in MySQL (setting aside the differences in capabilities and pricing).
MySQL is inferior to Oracle in many ways but clearly superior in setup cost. I'm not so sure about maintenance/development costs.

What is the best way to break down and follow project progress? [closed]

Hi
When I want to start a new project, I have enough details to just start it. And, as every programmer needs to, I need to analyse the project to understand how to write the code and classes and the relations between them...
Normally I do this on lots of sheets of paper, which is really annoying, and I also can't concentrate very well (on huge projects).
I want to know: what is the best way (or tool) to write down the implementation and design steps, to analyse, break down, and follow project progress?
Thanks
I strongly recommend PowerDesigner from Sybase.
You can build requirements documents and link each requirement to classes. You can generate a physical data model straight from your class model. It supports a wide variety of RDBMSs. There's a 15-day, fully functional trial available.
If the project is huge, there's plenty of budget for this tool. It's a lifesaver. The ROI is self-evident.
I suggest the VS 2008 Class Designer, a handy tool: it writes the classes behind the class diagram and also has tools to help analyze architecture.
This is incredibly easy. Use yellow sticky notes on a white board or large sheet of white cardboard.
Treat each sticky note as a separate process. For decisions, turn the sticky note so it looks like a diamond. This way you can move them around until it's right.
You can also split a complicated sticky note into two or three sticky notes. If you know that something needs to get done but don't know what that something is, simply write "Process goes here, ask (Marketing or Compliance)".
I've used this many, many times and it's very cost effective.
