user_dump_dest is there harm in putting this on a NAS - oracle

Is it correct to say that typically user_dump_dest is on a local drive?
If so, are there issues with mounting a NAS volume to both Unix and Windows and pointing user_dump_dest at that?
If so, what are they?
Are any issues worth not doing this in prod?

I've run 9.2 instances with user_dump_dest on a NAS and never had a problem with it.
If you are concerned, though, have Oracle write them locally, then sync them across to your NAS and remove them from the local disk. I've never needed to do that, though.
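A minimal sketch of that local-then-sync approach, suitable for cron; the directory paths are just placeholders for your environment:

    #!/bin/sh
    # Sketch only: paths are placeholders for your environment.
    LOCAL_DUMP=/u01/app/oracle/admin/ORCL/udump   # local user_dump_dest
    NAS_DUMP=/mnt/nas/oracle/udump                # directory on the NAS mount

    # Copy trace files across; rsync only transfers what has changed.
    rsync -a "$LOCAL_DUMP/" "$NAS_DUMP/"

    # Remove local trace files older than a day (already copied to the NAS).
    find "$LOCAL_DUMP" -name '*.trc' -mtime +1 -exec rm -f {} \;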

Yes, typically (and by default), user_dump_dest is on a local drive. I wouldn't expect any Oracle-specific issues with putting it on a NAS, but it would have all the potential issues that any application might: (1) if the NAS could not be reached, Oracle would not be able to write out user dump files, and (2) the write latency would be higher. The latency issue probably only matters if you're doing large-scale tracing to diagnose performance problems, since the write latency would affect the timing measurements. I would expect that if the NAS became unreachable and Oracle tried to write to it, the write would simply fail silently.
I don't think I would do it unless space constraints on the local disk forced me to. But generally the contents of this directory should be insignificant relative to your data.
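If you do decide to point it at the NAS, repointing the parameter is straightforward. A sketch from SQL*Plus; the NAS path is made up, and SCOPE=BOTH assumes the instance runs off an spfile:

    # Sketch: check and repoint user_dump_dest (hypothetical NAS path).
    {
      echo "SHOW PARAMETER user_dump_dest"
      echo "ALTER SYSTEM SET user_dump_dest = '/mnt/nas/oracle/udump' SCOPE=BOTH;"
      echo "SHOW PARAMETER user_dump_dest"
      echo "EXIT"
    } | sqlplus -s "/ as sysdba"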

Related

Full Access to HDD Without Removal

I work on the side doing computer repair. Standard operating procedure is to pop out the HDD/SSD, mount it on a backup machine, and pull the client's data (i.e., in case the drive fails or something goes horribly wrong, their data is protected). More and more often, my office is seeing SSDs soldered directly to the motherboard, making this technique impossible.
I was wondering if any of you knew of some method that would allow direct disk access without drive removal. An analogue would be mounting a phone in Mass Storage Device mode, I suppose. This may already be possible by doing something with a Linux live USB, but I'm not sure how. Booting from a live USB and transferring files over the network is unacceptably slow given the volume of computers we see and the amount of data involved.
On Apple computers, this is simple: plug in a Thunderbolt/FireWire cable and use Target Disk Mode to pull directly from the drive.
tl;dr: how do I back up a Windows computer without opening it up?
Boot a live Linux from a large USB3 HDD, and copy the client's data onto that same disk at roughly USB3 speed.
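Roughly, once the live system is up (the device names below are examples; check lsblk first, since yours will differ):

    # Run from the live environment booted off the USB3 HDD.
    mkdir -p /mnt/client /mnt/backup
    mount -o ro /dev/sda2 /mnt/client    # client's internal data partition, mounted read-only
    mount /dev/sdb1 /mnt/backup          # spare partition on the same USB3 disk

    # Pull the user profiles across at local USB3 speed.
    rsync -a --progress /mnt/client/Users/ /mnt/backup/client-data/Users/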

Where is data on a non-persistent Live CD stored?

When I boot up Linux Mint from a Live CD, I am able to save files to the "File System". But where are these files being saved to? It can't be the disc, since it's a CD-R. I don't think it's stored in RAM, because RAM can only hold so much data and isn't really intended to be used as a "hard drive". The only other option is the hard drive... but it's certainly not saving to any partition on the hard drive I know about, since none of them are mounted. So where are my files being saved to??
Believe it or not, it's a ramdisk :)
All live distros mount a temporary disk in RAM. The process is completely transparent to the user, thanks to the Linux kernel.
The OS first sets aside an area of your RAM as a virtual block device, then mounts it in your file system as if it were a regular hard drive.
Once you reboot, you lose all the data on that ramdrive.
A ramdrive is needed by almost all the software on a live CD. Almost all programs, desktop environments in particular, expect to be able to write files, even temporary ones, while they run.
As an example, there are two ways to run KDE on a live CD: either modify its code heavily so that you cannot change the wallpaper and so on (the desktop settings are stored inside ~/.kde), or deploy it onto a writable file system such as a ramdrive so that writes don't fail on the read-only medium.
Obviously, you can mount your real HDD or any USB drive into this virtual file system and make writes to them permanent, but by default no live distro mounts your drives over the root file system; they are usually mounted under specific subdirectories like /mnt, /media, or /windows.
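You can try the same idea by hand on any Linux box; a quick demo (run as root, and the mount point name is arbitrary):

    # A RAM-backed file system in miniature.
    mkdir -p /mnt/ramdemo
    mount -t tmpfs -o size=64m tmpfs /mnt/ramdemo

    echo "this lives in RAM" > /mnt/ramdemo/note.txt
    df -h /mnt/ramdemo     # a 64M file system with no disk behind it

    umount /mnt/ramdemo    # note.txt is gone, just like after a reboot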
Hope this helps.
It does indeed emulate a disk using RAM; from Wikipedia:
It is able to run without permanent installation by placing the files that typically would be stored on a hard drive into RAM, typically in a RAM disk, though this does cut down on the RAM available to applications.
RAM. In Linux, and indeed most Unix systems, just about everything, devices included, is exposed through the file system.
For example, to get memory info on Linux you use cat /proc/meminfo, where cat is used to read files. Then there's all sorts of strange stuff like /dev/random (to read random crap) and /dev/null (to throw away crap). ;-)
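A few harmless examples of that "everything is a file" idea:

    head -3 /proc/meminfo                  # kernel memory statistics, read like a file
    head -c 8 /dev/urandom | od -An -tx1   # a few random bytes (urandom won't block)
    echo "gone forever" > /dev/null        # output is simply discarded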
To make it persistent, use a USB device, properly formatted and carrying a specially named persistence file. See here:
https://help.ubuntu.com/community/LiveCD/Persistence
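The gist, as I recall it, is a writable loopback file with a special name on the stick. A rough sketch for Ubuntu-style (casper) images, assuming the stick's FAT partition is mounted at /media/usb and picking an arbitrary 1 GB size (details vary by release, so check that page):

    # Create and format a 1 GB persistence file named casper-rw.
    dd if=/dev/zero of=/media/usb/casper-rw bs=1M count=1024
    mkfs.ext3 -F /media/usb/casper-rw
    # Then boot the live system with the "persistent" boot option.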

How can I force Windows to clear all disk read cache data? [duplicate]

Possible Duplicate:
How to invalidate the file system cache?
I'm writing a disk-intensive Win32 program. The first time it runs, it runs a lot slower while it scans the user's folders using FindFirstFile()/FindNextFile().
How can I repeat this first time performance without rebooting? Is there any way to force the system to discard everything in its disk cache?
I know that if I were reading a single file, I can disable caching by passing the FILE_FLAG_NO_BUFFERING flag to a call to CreateFile(). But it doesn't seem possible to do this when searching for files.
Have you thought about doing it on a different volume, and dismounting / remounting the volume? That will cause the vast majority of everything to be re-read from disk (though the cache down there won't care).
You need to create enough memory pressure to cause the memory manager and cache manager to discard the previously cached results. For the cache manager, you could try to open a large (i.e., bigger than physical RAM) file with caching enabled and then read it backwards (to avoid any sequential I/O optimizations). The interactions between the VM and the cache manager are a little more complex and much more dependent on the OS version.
There are also caches on the controller (possibly, but unlikely) and on the disk drive itself (likely). There are specific IoCtls to flush this cache, but in my experience, disk firmware is untested in this arena.
Check out the Clear function of CacheSet by SysInternals.
You could avoid a physical reboot by using a virtual machine.
I tried all the methods in the answers, including CacheSet, but they would not work for FindFirstFile()/FindNextFile(). Here is what worked:
Scanning files over the network. When scanning a shared drive, it seems that Windows does not cache the folder contents, so it is slow every time.
The simplest way to make any algorithm slower is to insert calls to Sleep(). This can reveal lots of problems in multi-threaded code, and that is what I was really trying to do.

Let's say I am writing my code and then my PC dies: how necessary is it to do a complete scan if I don't want my later source code to be contaminated?

Let's say I am writing a Ruby on Rails program and, while I am editing a file, the machine blue-screens. In this case, how necessary is it to re-scan the whole hard drive if I don't want my future files to be damaged?
Say the OS is deleting a tmp file at the moment the computer crashes and is left holding pointers to some sectors on the hard drive. If my newly created files happen to land in those sectors, then the next time the OS cleans up it may think those "left-over" sectors were never cleaned, clean them again, and damage our source code (especially with Ruby on Rails, where some of the source code is generated by Rails rather than by us, so we may not know why our Rails server doesn't work if a file is affected). We can rely on SVN, but what if a file is affected before we check it in?
I think the official answer will be: "always scan the disk after a crash or power outage, check the data and even the free space, and attempt to fix any bad sectors." The thing is, with hard drives as big as they are nowadays, it could take 2 hours to scan everything, and especially at work we cannot wait 2 hours in the middle of the day.
Does anyone know whether, with a modern OS like XP, Vista, Mac OS, or Linux (say, when the power cord came loose and the machine didn't shut down properly, just dying at 0% battery), our source code is safe? Do they structure their writes to sectors so that at worst a sector is wasted, rather than sectors overlapping?
With a modern journaling file system (ext3/4, NTFS), the only problem would be that a file could be left in a "half-written" state. Obviously scanning is not going to help with that (that's what backups are for). The file system itself should not end up corrupted. If you are using something like FAT, then yes, you should worry about this.
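If you want to double-check on the Linux side that a volume really is journaled, it's a one-liner (the device name is just an example):

    # Look for "has_journal" in the feature list (ext3/ext4).
    tune2fs -l /dev/sda1 | grep -i journal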
There's really only one issue here:
Is any file that was being written left in some kind of "half written" state?
The primary cause of this would be the application/editor writing the file when the machine dies halfway through. In that case the file being written is, well, half done. If it was overwriting the original file, the original is "gone" and the new one is "half done". If you don't have a backup file, then, well, you have a problem.
As for a file having dangling pointers, references to sectors that were never written, or some such thing: that depends on your file system.
The major modern file systems are journaled and "won't allow" this to happen. You may have a "half written" file, but that's because the application only got to write half of it, not because the file system lost track of a sector pointer.
If you're playing file system games for performance, or whatever (such as using UFS without logging), then you would want to run an fsck to clean up the file system's metadata.
But if you're using a modern operating system and file system (i.e. anything from the past 5 years), you won't have this problem.
Finally, if you do have version control running, just do an "svn status"; it will show you any "corrupted" files, since they will have changed and it will detect that as well.
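For example, from the working copy after the machine comes back up (the file path is just an illustration):

    svn status                      # lists files that differ from what was checked in
    svn diff app/models/user.rb     # inspect a suspicious file
    svn revert app/models/user.rb   # throw away the local copy, restore the checked-in version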
I see some information on
http://en.wikipedia.org/wiki/Journaling_file_system
Journalized file systems
File systems may provide journaling, which provides safe recovery in the event of a system crash. A journaled file system writes some information twice: first to the journal, which is a log of file system operations, then to its proper place in the ordinary file system. Journaling is handled by the file system driver, and keeps track of each operation taking place that changes the contents of the disk. In the event of a crash, the system can recover to a consistent state by replaying a portion of the journal. Many UNIX file systems provide journaling including ReiserFS, JFS, and Ext3.
In contrast, non-journaled file systems typically need to be examined in their entirety by a utility such as fsck or chkdsk for any inconsistencies after an unclean shutdown. Soft updates is an alternative to journaling that avoids the redundant writes by carefully ordering the update operations. Log-structured file systems and ZFS also differ from traditional journaled file systems in that they avoid inconsistencies by always writing new copies of the data, eschewing in-place updates.

Compiling code on an external drive

To make things easier when switching between machines (my workstation at the office and my personal laptop), I have thought about using an external hard drive to store my working directory on. Specifically, I am looking at FireWire 800 drives (most are 5400 RPM with an 8 MB cache). What I am wondering is whether anyone has experience doing this with Visual Studio projects, and what sort of performance hit they see.
It depends on the size of the project. The throughput is low and the latency is high, so you're going to get hit every which way, but due to the latency you'll be hit harder if you have a lot of little files rather than a few large ones.
Have you considered simply carrying around a Git or other distributed repository and updating the machine repositories as you move around? Then you can compile locally and treat the drive as a roving server. Since only changes will be moved across, it should be faster, and your code will be 'backed up' in more places.
If you forget the drive, or it breaks, or it is lost or stolen, you can still sit down at a PC and program with no code missing (if you're at the last PC you used) or very little code missing (which will come back with a resync later anyway).
And it's just a hop skip and a jump away from simply using the network to move the changes between the systems if you don't want to carry the drive around later.
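A sketch of that workflow with Git; the repository path on the external drive is made up:

    # One-time: create a bare repository on the external drive.
    git init --bare /media/external/myapp.git

    # On each machine, add it as a remote of your working repository.
    git remote add roving /media/external/myapp.git

    # When leaving a machine:
    git push roving master
    # When sitting down at the next one:
    git pull roving master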
I use VMware and the virtual machines are on an external USB drive. Performance is fine. You might have some issues with the drive letter changing, but that's not an issue if you use virtual machines.
Granted, I work in an industry where Personal Information and Intellectual Property are king, but I don't like that idea at all. That hard drive disappears and you have a big problem.
Why not Remote Desktop into the work machine?
EDIT Stipud Spelingg
