mkfs.ext2 in Cygwin not working on Windows

I'm attempting to create a filesystem inside a file.
Under Linux it's very simple:
Create a blank 8 GB (sparse) file:
dd of=fsFile bs=1 count=0 seek=8G
"Format" the file:
mkfs.ext2 fsFile
This works great.
However, under Cygwin, running /usr/sbin/mkfs.ext2 produces all kinds of weird errors (I assume because of some abstraction layer). For example I get:
mkfs.ext2: Device size reported to be zero. Invalid partition specified, or
partition table wasn't reread after running fdisk, due to
a modified partition being busy and in use. You may need to reboot
to re-read your partition table.
or even worse, if I try to access the file through /cygdrive/...:
mkfs.ext2: Bad file descriptor while trying to determine filesystem size
:(
Please help.
Thanks

Well, it seems that the way to solve it is to not use any path on the file you wish to modify (i.e. run mkfs.ext2 from the directory containing the file, with a bare filename). Doing that seems to have solved it.
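For reference, a minimal sketch of that working sequence under Cygwin (the directory shown is hypothetical):
cd /home/user/images              # hypothetical directory holding the image file
dd of=fsFile bs=1 count=0 seek=8G
/usr/sbin/mkfs.ext2 fsFile        # bare filename, no /cygdrive/... prefix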
Also, it seems that my 8 GiB file has a size that's simply too big; it looks like the size variable gets reset, i.e.:
$ /usr/sbin/fsck.ext2 -f testFile8GiG
e2fsck 1.41.12 (17-May-2010)
The filesystem size (according to the superblock) is 2097152 blocks
The physical size of the device is 0 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? no
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
testFile8GiG: 122/524288 files (61.5% non-contiguous), 253313/2097152 blocks
Thanks anyway

Related

Input / Output error when using HDFS NFS Gateway

Getting "Input / output error" when trying work with files in mounted HDFS NFS Gateway. This is despite having set dfs.namenode.accesstime.precision=3600000 in Ambari. For example, doing something like...
$ hdfs dfs -cat /hdfs/path/to/some/tsv/file | sed -e "s/$NULL_WITH_TAB/$TAB/g" | hadoop fs -put -f - /hdfs/path/to/some/tsv/file
$ echo -e "Lines containing null (expect zero): $(grep -c "\tnull\t" /nfs/hdfs/path/to/some/tsv/file)"
when trying to remove nulls from a TSV and then inspect that TSV for nulls via the NFS location throws the error, but I am seeing it in many other places (again, dfs.namenode.accesstime.precision=3600000 is already set). Does anyone have any ideas why this may be happening, or debugging suggestions? Can anyone explain what exactly "access time" is in this context?
From a discussion on the Apache Hadoop mailing list:
I think access time refers to the POSIX atime attribute for files, the “time of last access” as described here for instance (https://www.unixtutorial.org/atime-ctime-mtime-in-unix-filesystems). While HDFS keeps a correct modification time (mtime), which is important, easy and cheap, it only keeps a very low-resolution sense of last access time, which is less important, and expensive to monitor and record, as described here (https://issues.apache.org/jira/browse/HADOOP-1869) and here (https://superuser.com/questions/464290/why-is-cat-not-changing-the-access-time).
However, to have a conforming NFS api, you must present atime, and so the HDFS NFS implementation does. But first you have to configure it on. [...] many sites have been advised to turn it off entirely by setting it to zero, to improve HDFS overall performance. See for example here ( https://community.hortonworks.com/articles/43861/scaling-the-hdfs-namenode-part-4-avoiding-performa.html, section "Don’t let Reads become Writes”). So if your site has turned off atime in HDFS, you will need to turn it back on to fully enable NFS. Alternatively, you can maintain optimum efficiency by mounting NFS with the “noatime” option, as described in the document you reference.
[...] check under /var/log, e.g. with find /var/log -name '*nfs3*' -print
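As a concrete illustration of the noatime route, here is a minimal sketch of mounting the gateway (the host name and mount point are hypothetical; vers=3,proto=tcp,nolock are the options usually recommended for the HDFS NFS gateway):
# mount the HDFS NFS gateway with noatime so reads don't trigger access-time updates
sudo mount -t nfs -o vers=3,proto=tcp,nolock,noatime nfs-gateway.example.com:/ /nfs/hdfs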

Is it possible to log the output of a U-Boot command into a file from the U-Boot prompt?

I am working with an embedded board which supports U-Boot.
I am trying to write to and read from the eMMC device connected to the board.
After a read, I need to look at the contents and compare them with the data that I have written.
Is there a way to log the output of a U-Boot command into a file? For example, when I read a block from eMMC into memory and view its contents with:
mmc read 0x10700000 133120 1
mm.l 0x10700000
can I capture that output into a file and then store the file on an eMMC partition or send it to a TFTP server?
Thank you for your time,
Nishad
The save command can be used to write memory to a file.
save file to a filesystem
save <interface> <dev[:part]> <addr> <filename> bytes [pos]
- Save binary file 'filename' to partition 'part' on device
type 'interface' instance 'dev' from addr 'addr' in memory.
'bytes' gives the size to save in bytes and is mandatory.
'pos' gives the file byte position to start writing to.
If 'pos' is 0 or omitted, the file is written from the start.
Of course, this requires the file system to be writable. For FAT this implies building with CONFIG_FAT_WRITE=y.
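As a rough example of how the two commands could be combined (the device and partition numbers, file name, and the 0x200-byte size for a single 512-byte block are assumptions for illustration):
mmc read 0x10700000 133120 1                    # read one block from eMMC into RAM
save mmc 0:1 0x10700000 block133120.bin 0x200   # write 0x200 bytes from RAM to a file on mmc 0, partition 1
If the board has network support, tftpput (built with CONFIG_CMD_TFTPPUT) can upload the same memory range to a TFTP server instead, e.g. tftpput 0x10700000 0x200 block133120.bin.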

What is the file ORA_DUMMY_FILE.f in Oracle?

oracle version: 12.2.0.1
As you know, these are the Unix processes for the parallel servers in Oracle:
ora_p000_ora12c
ora_p001_ora12c
....
ora_p???_ora12c
They can also be seen with the view gv$px_process.
The spid for each parallel server can be obtained from there.
Then I look for the open files associated with the parallel server here:
ls -l /proc/<spid>/fd
And I'm finding around 500-10000 file descriptors for several parallel servers, all like this one:
991 -> /u01/app/oracle/admin/ora12c/dpdump/676185682F2D4EA0E0530100007FFF5E/ORA_DUMMY_FILE.f (deleted)
I've deleted them using the following (actually I've created a small script to do it, because there are thousands of them):
gdb -p <spid>
gdb> p close(<fd_id>)
But after some hours the file descriptors start being created again (hundreds every day).
If they are not deleted, then eventually the Linux limit is reached and any parallel query throws an error like this:
ORA-12801: error signaled in parallel query server P001
ORA-01116: error in opening database file 132
ORA-01110: data file 132: '/u02/oradata/ora12c/pdbname/tablespacenaname_ts_1.dbf'
ORA-27077: too many files open
Does anyone have any idea of how and why these file descriptors are being created, and how to avoid it?
Edited: Added some more information that could be useful.
I've verified that when a new PDB is created, a directory DATA_PUMP_DIR is created in it (select * from all_directories) pointing to:
/u01/app/oracle/admin/ora12c/dpdump/<xxxxxxxxxxxxx>
The Linux directory is also created.
Also, one file descriptor is created pointing to ORA_DUMMY_FILE.f in the new dpdump subdirectory, like the ones described initially:
lsof | grep "ORA_DUMMY_FILE.f (deleted)"
/u01/app/oracle/admin/ora12c/dpdump/<xxxxxxxxxxxxx>/ORA_DUMMY_FILE.f (deleted)
This may be OK; the problem I face is the continuous growth of file descriptors pointing to ORA_DUMMY_FILE.f, which eventually reaches the Linux limits.
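One quick way to watch that growth is to count the leaked descriptors per parallel server (a simple sketch; <spid> is a placeholder as above):
ls -l /proc/<spid>/fd 2>/dev/null | grep -c 'ORA_DUMMY_FILE.f (deleted)'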

Meaning of ST_INO (os.stat() output) in Windows OS

Can anyone tell me what the meaning of the value for st_ino is when running os.stat() on Windows (Python 3.5.3)?
In earlier Python releases this contained dummy values, but that has recently changed and I couldn't find how it is calculated/generated. I suspect that it differs based on the file system (NTFS, FAT, ...).
Example
import os
stat = os.stat(r'C:\temp\dummy.pdf')
for attr in dir(stat):
    if attr.startswith('st_'):
        print('{}: {}'.format(attr, stat.__getattribute__(attr)))
Result
st_atime: 1495113452.7421005
st_atime_ns: 1495113452742100400
st_ctime: 1495113452.7421005
st_ctime_ns: 1495113452742100400
st_dev: 2387022088
st_file_attributes: 33
st_gid: 0
st_ino: 10414574138828642
st_mode: 33060
st_mtime: 1494487966.9528062
st_mtime_ns: 1494487966952806300
st_nlink: 1
st_size: 34538
st_uid: 0
Background
I used the shutil.copyfile() function and ran into a SameFileError. After looking through the code (and despite what it says in the comments of shutil.py), the shutil._samefile() function does not compare pathnames on Windows. Instead, it uses os.path.samefile(), which compares the st_ino and st_dev values.
Both the source and target file resided on the same device (volume), which would explain why the value for st_dev was identical. But I'm still puzzled as to why st_ino had the same value for both files.
Remark: both files were on a SharePoint volume mounted using WebDAV, so their st_ino value may have been 0 (dummy), which would explain why they were equal. I'm still curious though ;-)
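For a file on a local NTFS volume, one sanity check (assuming st_ino is derived from the NTFS file index, which is an assumption rather than something shown above) is to compare it with the file ID that Windows itself reports; the path is just the example from above:
fsutil file queryfileid C:\temp\dummy.pdf
On NTFS the low 64 bits of the reported ID should correspond to st_ino; on network filesystems such as a WebDAV mount it may simply be 0.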
Update
As I suspected, the value of st_ino returned for the files residing on the SharePoint volume (WebDAV) was 0, as was the value of st_dev. This is the reason for the (incorrect) SameFileError. Example output:
\\sharepoint#SSL\AUT.pdf os.stat_result(st_mode=33206, st_ino=0, st_dev=0, st_nlink=1, st_uid=0, st_gid=0, st_size=4717, st_atime=1495031011, st_mtime=1495031011, st_ctime=1495031570)
\\sharepoint#SSL\ING.pdf os.stat_result(st_mode=33206, st_ino=0, st_dev=0, st_nlink=1, st_uid=0, st_gid=0, st_size=4722, st_atime=1495031203, st_mtime=1495031203, st_ctime=1495031733)
\\sharepoint#SSL\WAG.pdf os.stat_result(st_mode=33206, st_ino=0, st_dev=0, st_nlink=1, st_uid=0, st_gid=0, st_size=4710, st_atime=1495031511, st_mtime=1495031511, st_ctime=1495031912)

CouchDB has an apparent limit on attachment sizes on Mac OS X

I have a plain vanilla CouchDB from Apache, running as an app on Mac OS X 10.9. If I try to attach an attachment larger than 1 MB to a document, it just hangs and does nothing.
I have tried CouchDB on Linux, and there the sky is the limit.
I first thought it had to do with low limits on the Mac, but it doesn't seem so:
➜ ~ ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-v: address space (kbytes) unlimited
-l: locked-in-memory size (kbytes) unlimited
-u: processes 709
-n: file descriptors 256
What is causing this? Why? And how do I fix it?
Check the config files given by couchdb -c. You probably have this somewhere in them (for some unknown reason):
[couchdb]
max_attachment_size = 1048576 ; bytes
Remove or comment out the line and you should be fine.
Or maybe it was compiled with this value hardcoded, in which case you could add this line to one of the config files and increase the value.
Update
max_attachment_size is undocumented, so it is probably not safe to rely on. I leave the original answer as it seems to have solved the OP's problem, but according to the docs the attachment size should be unlimited. Also, attachment_stream_buffer_size is the config key controlling the chunk size of attachments, which might be relevant.
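If you prefer to check or change this at runtime rather than editing the ini files, the 1.x _config HTTP API can be used (assuming a local instance on the default port, and assuming the key is honoured by your build):
# read the current value (returns the quoted value, or an error if it is not set)
curl http://127.0.0.1:5984/_config/couchdb/max_attachment_size
# delete the override so the default (unlimited) applies again
curl -X DELETE http://127.0.0.1:5984/_config/couchdb/max_attachment_size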

Resources