I need to execute a script (some sort of macro or VB script) that posts data to a different server every time a record is added or changed on a local MS Access database. This is trivial on a 2007 or later .accdb file with After Events but I need to accomplish the same effect on a 2000 formatted .mdb file. Does anyone know of a way to simulate an 'after create' event on this older database type?
We have SAS datasets to which many people have read and write access. Users often click on those tables to open them, which locks the table. To circumvent this problem, I tried creating views in the same library, but if people double-click a view it opens the underlying table and locks it again.
One solution I am thinking of is to create the views in a new library with the access=readonly option.
Is there a read-only view option, so that someone can double-click the view without locking the underlying table? And is it possible to create such a view in the same library?
I also had to deal with this problem in an environment where we didn't have SAS/SHARE. My solution was to write a batch job that ran at regular intervals doing the following:
Divert the log to a text file.
Attempt to lock the table using a lock statement.
Release the lock immediately if successful.
Parse the log file using a data step.
Extract the usernames of anyone locking the table.
Send an email to all users of the table notifying them that user X was locking it.
Updates to the table only took a fraction of a second each, so although it was possible to catch someone making a legitimate update (or prevent them from doing so), this was very unlikely.
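For illustration, here is a minimal sketch of that job in SAS. The library, table, and log paths are placeholders, and the exact wording of the lock-failure message (and whether it names the locking user) varies by engine and version, so the parsing step would need adapting to your site's log output:

    /* Divert the log to a text file */
    proc printto log="/tmp/lockcheck.log" new; run;

    /* Attempt to lock the table; release it immediately if successful.
       (If the lock attempt fails, the CLEAR simply logs another error.) */
    lock mylib.mytable;
    lock mylib.mytable clear;

    /* Restore the default log destination */
    proc printto; run;

    /* Parse the log file and keep any lock-failure lines */
    data locked_info;
        infile "/tmp/lockcheck.log" truncover;
        input logline $char200.;
        if index(logline, "lock is not available") then output;
    run;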
I suggest the best way around this is to create a simple 'data viewer' web application. If you have a mid-tier and a stored process server then you are ready to go; it should only take a couple of hours if you have basic JavaScript / HTML knowledge.
I wrote a detailed guide for building web apps using SAS in this SGF paper, and a quick summary in this blog post.
The hard part will be convincing your users to use the web app instead of client tools for reading the data!
In the long term it is really best to avoid using SAS datasets and use an actual database instead.
You can create views for those datasets in the same library, but save them to a new SAS folder and give the users read-only access to that folder and the views. And educate your users about SAS table locks so that they won't get put off if they see lock errors.
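As a sketch, using placeholder library and dataset names (it is the operating-system permissions on the views folder that actually make it read-only for users):

    /* Library pointing at a folder your users can only read */
    libname viewlib "/sasdata/views";

    /* A view over the base table, stored in the read-only folder */
    proc sql;
        create view viewlib.inventory_v as
        select * from mylib.inventory;
    quit;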
If you want users to be able to write to those tables, then I recommend having a control framework or process in place.
Example Process:
Users have to submit their code or the data that they want to add / edit,
As an admin you apply those changes in batches / once a week or a day.
Example Control Framework:
All tables should be edited / written to using Stored Processes,
Create Stored Processes that check the table lock before editing / writing to the tables,
Users will use the SPs to write to the tables,
If two users run the same SP at the same time, the second SP to run will see the lock flag and print a message telling the user to run the SP again in a few minutes.
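As a rough sketch, the lock check inside such a Stored Process could use the LOCK statement and the SYSLCKRC macro variable it sets (the library, table, and staging dataset names here are placeholders):

    %macro safe_append;
        lock mylib.orders;                /* try to take the lock */
        %if &syslckrc = 0 %then %do;      /* 0 = lock acquired    */
            proc append base=mylib.orders data=work.new_rows;
            run;
            lock mylib.orders clear;      /* release for others   */
        %end;
        %else %do;
            %put NOTE: Table is in use - please run this Stored Process again in a few minutes.;
        %end;
    %mend safe_append;
    %safe_append;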
Situation
I have a CSV file called inventory.csv located on an Oracle database server (Windows Server 2008 R2 Enterprise Edition). This CSV file is used as an Oracle external table.
Every hour, a scheduled task (Windows Task Scheduler) executes a .bat file that copies over an updated version of inventory.csv, overwriting the original.
The data is then used by a reporting application.
Problem
The application that uses the data in inventory.csv has no way of knowing when the data was last updated.
Ideally, I'd like the "last updated date" to be accessible as a column in the table.
One possible solution is to log the current date/time to a separate file whenever the copy happens, and then reference that file as an external table as well. However, this solution has too many moving parts, and I'd prefer something simpler, if possible.
I know that the CSV file itself knows when it was created... I'm wondering if there is any way for the Oracle external table to read the "Created" date from the CSV file's properties?
Or any other ideas?
What version of Oracle?
If you are using 11.2 or later, you can use the preprocessor feature of external tables to run a shell script / batch file on the file before it is loaded. My bias would be to go for simplicity: have the preprocessing script grab the date, store it in a separate file, and have a separate external table that loads and exposes that data. That's likely easier than adding the date to every row.
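A sketch of one way to wire this up, with hypothetical names throughout (data_dir and exec_dir are Oracle directory objects you would create, and get_date.bat is a one-line script). Note that because the preprocessor's standard output is what actually gets loaded, the script can simply echo the file's timestamp rather than staging it in an intermediate file:

    -- get_date.bat receives the location file's path as its argument
    -- and must write the "data" to standard output, e.g.:
    --   @echo off
    --   for %%F in (%1) do echo %%~tF

    CREATE TABLE inventory_last_update (
        last_updated VARCHAR2(30)
    )
    ORGANIZATION EXTERNAL (
        TYPE ORACLE_LOADER
        DEFAULT DIRECTORY data_dir
        ACCESS PARAMETERS (
            RECORDS DELIMITED BY NEWLINE
            PREPROCESSOR exec_dir:'get_date.bat'
            FIELDS TERMINATED BY ','
        )
        LOCATION ('inventory.csv')
    );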
Until very recently we ran a 3rd party HR database on an Oracle Unix environment. I have additionally set up various web services that hit stored procedures to carry out a few bespoke processes for our users, and all ran well for years.
However, now that we have moved to Oracle on a Windows environment there is suddenly a big problem.
The best example I have is a VB.Net solution that reads in a 2000 row CSV of employees into a datatable, runs a couple of stored procedures to bring back Post Id etc, populates a database table with the results, then feeds it all back out into a new CSV. This process used to take 1-2 minutes to complete on Unix. It now takes well over 2 hours and kills the server!
The problem manifests by overwhelming the CPU on the database server. Any stored procedure call sends Oracle.EXE into overdrive, completely maxing out the CPU core that it's using, such that no other stored procedures can run and everything grinds to a halt.
We have run Oracle Enterprise Manager, which suggested creating some indexes etc., but nothing has improved the issue. Like I say, the SQL ran fine and swiftly for years, and it hasn't changed at all.
Does anybody know what could be causing this? I am completely at a loss.
The way I see it, it must either be:
1. A CPU/hardware issue (but we have investigated, added extra cores etc to no avail)
2. An Oracle configuration issue; or
3. An issue with the 3rd party database (which is supposedly identical to what it was on Unix).
Thanks to anyone who read this far.
P.S. I've had a Stack Overflow user account for years but can't get logged into it any more. Back to noobie status for me!
I have created a LightSwitch application with an internal database.
Now I want to publish my application, and I also want to publish the data in my internal database. How can I do this?
For example: I have an application, Fantacalcio, and I created some players in LightSwitch's internal database. Now when I publish my application and install it on my PC, there is no data in my application. I want the players I created before to be there when I install my application.
You can do it programmatically in something like Application_Initialize, or in a SQL script.
LS has no "built-in" way to pre-populate data, so it's a matter of choosing a workaround.
One possible way is to do the following:
Attach the LightSwitch internal database to SQL Server.
Export all the data into a SQL script; here are the instructions.
After you have the SQL script (mostly INSERT statements), run the script on your designated database.
The exact same data should now be populated there.
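To make that concrete, the exported script is just plain INSERT statements; a hypothetical fragment for the Fantacalcio example (the Players table and its columns are invented for illustration) might look like:

    -- Hypothetical fragment of the exported data script
    SET IDENTITY_INSERT Players ON;

    INSERT INTO Players (Id, Name, Team) VALUES (1, 'Rossi', 'Juventus');
    INSERT INTO Players (Id, Name, Team) VALUES (2, 'Bianchi', 'Milan');

    SET IDENTITY_INSERT Players OFF;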
I have a WPF application with Oracle 11gR2 as the back end. We need to enable our application to work in both online and offline (disconnected) mode. We are using Oracle Standard Edition (with a single instance) as the client database. I am using sequence numbers for the primary key columns. Is there any way to sync my client and server databases without any issues in the sequence number columns? Please note that we will restrict creation of basic (master) data to the server only.
There are a couple of approaches to take here.
1- Write the sync process to rebuild the server tables (on the client) each time with a CREATE TABLE ... AS SELECT (Oracle's CTAS; SELECT INTO is the SQL Server equivalent). Once complete, RENAME the current table to a "temp" table, and RENAME the newly created table to the proper name. The sync process should DROP the temp table as one of its first steps. Finally, recreate the indexes and you should be good-to-go. A sketch follows at the end of this answer.
2- Create a backup of the server-side database, write a shell script to copy it down and restore it on the client.
Each of these options will preserve your sequence numbers. Which one you choose really depends on your skills. If you're more of a developer, you can make #1 work. If you've got some Oracle DBA skills you should be able to make #2 work.
Since you're on 11g, there might be a cleaner way to do this using Data Pump.
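For reference, here is a minimal sketch of option 1 in Oracle SQL, assuming a database link (called server_db here) from the client to the server, plus placeholder table and index names:

    -- Drop the fallback copy left over from the previous sync run
    DROP TABLE employees_old;

    -- Rebuild a fresh copy of the server table on the client (CTAS)
    CREATE TABLE employees_new AS
        SELECT * FROM employees@server_db;

    -- Swap the new copy in, keeping the old one as a fallback
    ALTER TABLE employees RENAME TO employees_old;
    ALTER TABLE employees_new RENAME TO employees;

    -- Recreate indexes on the freshly built table
    CREATE INDEX employees_name_ix ON employees (last_name);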