How long will my Mobile Analyzer scans be available in the database if I don’t delete them?

When I am running scans using the AppScan Mobile Analyzer SaaS offering, how long will my scans be stored in the database if I do not delete them?

Scans will be held in the database for one year if the user does not manually delete them.

Related

Sybase to Oracle table Migration via Migration Wizard offline

How can I create a script of inserts for my Sybase to Oracle migration? The Migration Wizard only gives me the option to migrate procedures, triggers, and such; there is no option for just tables. When I try to migrate tables offline and move data, the datamove/ folder is empty. I also want to migrate only specific tables (the ones with long identifiers), because I was able to migrate the rest with Copy to Oracle.
I must also note that I do not want to upgrade to a new version of Oracle. We are currently on ~12.1, so I need to limit identifier length.
How can I get the offline scripts for table inserts?
You (probably!) don't want INSERTs for offline migration scripts. If you're just running INSERTs, then the online method would probably suffice.
The point of the offline strategy is to extract the data from your Sybase instance to flat, delimited text files (using BCP), which we can THEN load back into an Oracle Database using SQL*Loader (SQLLDR) or external tables. That will be far faster than running INSERT scripts.
Take a look at this whitepaper where I go into offline Sybase migrations in detail.
You could also consider DCO-based Sybase-to-Oracle replication via Sybase Replication Server. That way, not only is all the data moved, but DML updates are also propagated online, which lets you switch your system over live.
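As a rough sketch of the external-table half of that offline flow (all object, file, and column names here are hypothetical; adjust to your own schema and paths):

```sql
-- On the Sybase side, BCP out a delimited flat file, e.g.:
--   bcp mydb..customers out customers.dat -c -t"|" -U user -S server
-- Then, on the Oracle side:

-- 1) Point Oracle at the directory holding the BCP output.
CREATE OR REPLACE DIRECTORY sybase_dump_dir AS '/data/sybase_dump';

-- 2) Define an external table over the pipe-delimited file.
CREATE TABLE customers_ext (
  customer_id   NUMBER,
  customer_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY sybase_dump_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY '|'
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('customers.dat')
)
REJECT LIMIT UNLIMITED;

-- 3) Load the real table with a direct-path insert, avoiding
--    row-by-row INSERT scripts entirely.
INSERT /*+ APPEND */ INTO customers SELECT * FROM customers_ext;
COMMIT;
```

SQLLDR with a control file over the same flat file is the alternative; the external-table route keeps everything inside SQL.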

Why so slow returning data from Oracle external tables?

We are an ETL shop and make heavy use of external tables. Typically these tables are queried to populate staging tables. I am surprised at the time it takes for queries to return data from the external tables.
Typically there is around a 15 second delay before any result is returned. This is true even in the cases when the data file contains no data and when the data file does not exist. The delay doesn't seem related to the number of rows in the file.
I am logging into the database server itself, on which the external table data files are located.
Is this expected behaviour?
File system operations (ls, vim) at least on smaller files happen with no delay.
All files on local disk.
Oracle 12.1.
Oracle Linux Server release 6.6
I would recommend reviewing the Oracle 12.2 release notes. There was a patch (22911748) for the Big Data Appliance firmware on Exadata, and a corresponding fix was made in 12.2.
It addresses a view that is specific to external table access, and it's possible that you are affected by it. The view, LOADER_DIR_OBJS, is used to query the directory that external tables point to.
Our customers are running into very similar issues, and Oracle recommended installing the 12.2 release, which contains the fix.
So we are currently testing the 12.2 release. Any time an external table is read, it has to access the LOADER_DIR_OBJS system view. Typically the poor performance comes from this view, which accesses the SYS.OBJ$ and SYS.X$DIR system objects with a suboptimal query plan. Some people have found workarounds (see Oracle Workaround Document ID 2034938.1 to see if it applies to you).
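One quick way to check whether this view is the bottleneck is to time it directly in SQL*Plus (this assumes you have SYS-level access; the external table name below is hypothetical):

```sql
-- Time the dictionary view that the external-table code path hits.
SET TIMING ON

-- If this query alone accounts for most of the ~15-second delay,
-- the LOADER_DIR_OBJS plan is likely your problem
-- (see Doc ID 2034938.1 for workarounds).
SELECT COUNT(*) FROM sys.loader_dir_objs;

-- Compare against a query on the external table itself.
SELECT COUNT(*) FROM my_external_table;
```

If the timings match, the delay is in the dictionary lookup, not in reading your data files, which would also explain why empty or missing files show the same 15-second stall.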

Database Crawling in GSA

I can see there are two ways to index database records in GSA:
Content Sources > Databases
Using a DB connector
As per my understanding, Content Sources > Databases does not support automatic recrawl; we have to manually sync after any changes occur in the DB records. Is that correct?
Also, would using DB connectors help with automatic recrawl?
I would like to check the DB every 15 minutes for changes and update the index accordingly. Please suggest a viable approach to achieve this.
Thanks in advance.
You are correct that Content Sources > Databases does not support any sort of automated recrawl.
Using either the 3.x Connector or the 4.x Adaptor for Databases supports automatic recrawls. If you are looking to index only the rows of databases, and not using it to feed a list of URLs to index, then I would go with the 4.x Database Adaptor.
The Content Sources > Databases approach is good for data that doesn't change often, where a manual sync is acceptable. That said, it's easy enough to write a simple client that logs in to the admin console and hits the 'Sync' link periodically.
However, if you want frequent updates, like every 15 minutes, I'd definitely go with the 4.x plexi-based adaptor; not because it's newer, but because it's better. Older versions of the 3.x connector were a bit flaky (although the most recent versions are much better).
What flavour DB are you looking to index?

How to maintain common data across Windows Phones?

I am creating an offline app which has a table of data (SQL), and that data must be present on every phone using my app. The application must update this table whenever the phone gets connected to the Internet. Users can insert and update, but cannot delete, their data in the table.
I want this table to live in a common place, and to store it in Isolated Storage for offline usage. In my search I found that we can do this through OData with SQL Azure in the cloud. I would like to know: is there any other way of doing it?
You can use a SQL CE database, available since Mango (Windows Phone 7.1), which is stored in IsolatedStorage.
Here is a good article.
For online updating I always use custom web services running on my server, which are not that hard to develop.

Oracle 11g Database Synchronization

I have a WPF application with Oracle 11gR2 as the back end. We need to enable our application to work in both online and offline (disconnected) modes. We are using Oracle Standard Edition (with a single instance) as the client database. I am using sequence numbers for the primary key columns. Is there any way to sync my client and server databases without conflicts in the sequence-number columns? Please note that we will restrict creation of basic (master) data so that it happens only on the server.
There are a couple of approaches to take here.
1- Write the sync process to rebuild the server tables (on the client) each time with a SELECT INTO. Once complete, RENAME the current table to a "temp" table, and RENAME the newly created table to the proper name. The sync process should DROP the temp table as one of its first steps. Finally, recreate the indexes and you should be good to go.
2- Create a backup of the server-side database, write a shell script to copy it down and restore it on the client.
Each of these options will preserve your sequence numbers. Which one you choose really depends on your skills. If you're more of a developer, you can make #1 work. If you've got some Oracle DBA skills you should be able to make #2 work.
Since you're on 11g, there might be a cleaner way to do this using Data Pump.
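Option 1 might look roughly like this on the client (the table, database link, index, and sequence names are all hypothetical, and the sequence re-alignment at the end is one illustrative way to keep client-generated keys clear of server-assigned ones):

```sql
-- Pull a fresh copy of the server table over a database link.
CREATE TABLE orders_new AS
  SELECT * FROM orders@server_link;

-- Swap it in, keeping the old copy briefly as a fallback.
DROP TABLE orders_old;                    -- left over from the previous sync
ALTER TABLE orders RENAME TO orders_old;
ALTER TABLE orders_new RENAME TO orders;

-- Recreate the indexes on the freshly loaded table.
CREATE INDEX orders_cust_ix ON orders (customer_id);

-- Re-align the client sequence past the highest synced key, so new
-- rows created offline cannot collide with server-assigned IDs.
DECLARE
  next_id NUMBER;
BEGIN
  SELECT NVL(MAX(order_id), 0) + 1 INTO next_id FROM orders;
  EXECUTE IMMEDIATE 'DROP SEQUENCE orders_seq';
  EXECUTE IMMEDIATE 'CREATE SEQUENCE orders_seq START WITH ' || next_id;
END;
/
```

The rename swap keeps the table available to readers for all but an instant, and dropping the old copy at the start of the next sync gives you a rollback point in between.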
