In Linux, if I have a file I'm sharing with a group, and I put the file on a USB memory stick, for example, and copy it to a computer that doesn't have the same group or users, does the file end up with no permissions for anyone on that new computer? What if I bring a Linux file that only lets user X read it to a Windows machine? Who gets to read it on the Windows machine, since user X (and the group) don't exist on that machine?
What kind of security do I get copying a Linux file to another Linux machine? How about to a Windows machine?
What kind of security do I get copying a Windows file to another Windows machine? How about to a Linux machine?
Regarding the USB key: USB keys generally use one of the FAT family of filesystems, and FAT doesn't support security at all, so the security information is lost as soon as you copy the file onto it. For your first question, then: anyone who has the USB key can read the file on any computer, from any user account. It is possible to format a USB key with another filesystem that does support security (NTFS, for example); in that case, if the accounts do not exist on the target computer (and on Windows, at least, they must be domain accounts or similar; just giving two local accounts the same name will not do it), only a user who can ignore filesystem permissions (such as root on *nix or Administrator on Windows) will be able to access the file.
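To illustrate the Linux side (a sketch; the device name and mount point are assumptions): on a FAT volume, Linux synthesizes a single owner, group, and mode for every file from the mount options, because there is nothing on disk to read:

# Assuming the stick is /dev/sdb1 and we mount it at /mnt/usb.
# Ownership and mode come from the mount options, not the filesystem.
sudo mount -t vfat -o uid=1000,gid=1000,umask=022 /dev/sdb1 /mnt/usb
ls -l /mnt/usb    # every file shows the same owner, group, and mode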
For the second question, I'm not 100% sure, but I believe it depends on how you copy the file; tools like FTP and rcp generally don't copy permissions over, so I would expect the file to get some kind of default permissions, either from the target directory or from a default built into the copy program.
For Windows, to the best of my knowledge the security descriptor is initially inherited from the target folder; permissions are, again, not carried across machines. It can be modified after the copy.
In general, except in specific environments that are designed to transfer permissions, I would assume that transferring any file from one computer to another resets the security permissions to a default (generally whatever a new file in that location would receive).
As technophile said, removable drives usually use FAT filesystems, so no permission info is copied at all.
On more 'direct' copies between *nix machines, if the writing process runs as root, there are usually flags to preserve the permission bits and owner/group. Most of these tools preserve user/group identities by number, so if there's no 'global' identity database (LDAP, NIS, or even AD), look for a 'by name' option instead.
Some examples (a short sketch of these flags follows the list):
NFS: assumes 'identity by number', unless you use one of the 'squash' options to give every file the same owner/group.
cp: the '-p' flag preserves mode, ownership (by number), and timestamps.
scp: the '-p' flag preserves modes and times, but (usually) not ownership.
rsync: '-p' preserves permissions, while '-o' and '-g' preserve owner and group (only effective when run as root). It tries to match users and groups by name, falling back to numeric IDs when that isn't possible (or when you pass '--numeric-ids').
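A quick sketch of those flags in use (all paths and hosts here are hypothetical):

# cp: preserve mode, numeric ownership, and timestamps
cp -p /srv/shared/report.txt /mnt/backup/

# scp: preserve modes and times, but not ownership
scp -p report.txt user@host:/tmp/

# rsync: -a implies -o, -g, and -p (ownership sticks only when run as root);
# --numeric-ids skips name matching and keeps the raw uid/gid numbers
sudo rsync -a --numeric-ids /srv/shared/ root@host:/srv/shared/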
Why bother with permissions?
They get in the way most of the time, unless you are running some sort of server.
Perhaps copy from a Linux filesystem to FAT32, exFAT, or NTFS so you don't have to deal with permissions?
That is what I do. I usually choose NTFS for file 'sharing' between my desktop and laptop, where the laptop runs Linux and the desktop runs Windows 7. I cannot easily go *nix laptop to *nix desktop without running chmod multiple times (and even THAT doesn't guarantee read/write permissions).
When I tried to share between *nixes, everything was quite bad.
I need FULL read/write access for everyone, on any box, from any external drive.
The only problem with NTFS is when your *nix doesn't write to it or shut it down correctly.
Then I have to use Windows to fix it (a pain too). Hence one of the reasons I keep Windows around.
Every flash and external drive I have is NTFS, except two which are FAT32, to 100% GUARANTEE no foul-ups with Linux demanding permissions (which many times I cannot change for some reason, even with chmod).
Of course my data is plain old movies, music, pictures, and similar domestic items.
But the same theory holds: if you don't or can't write permissions with the file, anyone should be able to use the file from any operating system.
I have gone so far as to copy a stubborn file onto a FAT32 flash drive just to strip its permissions, then copy it back. I HATE typing command-line stuff.
For me, I need 100% read/write access to ANY data I have on external drives, for all computers.
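(For what it's worth, the same everyone-can-read-and-write effect can be had in place with a single command; a sketch, assuming the drive is mounted at /mnt/external:

# Recursively grant read/write to everyone; the capital X keeps directories
# traversable without marking plain files executable.
chmod -R a+rwX /mnt/external

Whether that's less painful than the FAT32 round trip is a matter of taste.)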
About using root: most Linux distributions strongly discourage using root for everyday tasks.
Again, the easy way around permissions is that if you can copy a file, you can strip its permissions by sending it to FAT32. Or NTFS. And there goes the security.
If something is so sensitive that you NEED file security when sharing it, then why share it in the first place?
If you want to prevent tampering with a file, burn it to a CD/DVD. That is read-only. Even if someone copies it and tampers with the copy, the original is still untouched.
Related
If I commit some code into Git on a Windows machine with an NTFS hard drive, and then check it back out again in another directory, will it retain my original owner, NTFS permission ACLs, and file attributes?
If so, will it automatically break inheritance to do so, or does it require a setting of some sort?
Git is a platform-independent code management tool and runs on numerous different operating systems. As a result, it is necessarily indifferent to any particular platform's notion of security or access-control information. Security metadata about a file in Windows would be meaningless in, for example, a Linux environment, and vice versa.
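To see how little Git actually records, list the staged mode bits; Git keeps only a simplified POSIX mode per entry (the output below is abbreviated and illustrative):

$ git ls-files --stage
100644 a3f2... 0    README.md
100755 9b1c... 0    build.sh
$ git config core.fileMode
false

The mode (100644 for a regular file, 100755 for an executable) is all Git preserves, and core.fileMode is usually false on Windows/NTFS, so even the executable bit is typically ignored there.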
Chroot is often thought of as a kind of sandbox, but on Unix it is also a way to run programs from an installation that is not currently booted.
When I search for 'chroot for Windows', I see things like sandboxes. I don't want security; I want a way to rescue the system. For example, if I disabled syskey with ntpasswd, running C:\windows\system32\syskey.exe through such a utility would modify the registry entries of the offline installation, not the current one.
It could be called runon, by analogy with runas for alternative users.
What would chroot mean here for Windows? Well, there is WinRE, which lets you keep the same drive letters as your Windows installation. Here is an example: compact.exe is not present on WinRE installs. If you cd to the offline install's %Windir%\system32 and run compact, it won't work (except with /?). But if you run
X:\sources\>path C:\Windows\System32\
You are now using the files from your offline Windows. Base DLLs such as ntdll.dll or possibly gdi.dll are those from C:\Windows\System32 instead of X:\%windir%\system32, and running compact will work.
But programs run this way still use the current registry. The main keys (HKLM, HKCC, HKCR, HKCU, HKU, HKEY_PERFORMANCE_DATA) and their contents are those of the current WinRE/PE installation, not those you would have when booted into your Windows. So if a program wants to modify some registry entries, it will modify the hives of X:\windows\system32\config, not those located on the C:\ system drive.
It is possible to mount the hives of your offline Windows under HKLM and edit them, but a program that keeps its information in HKLM\Software would still look at HKLM\Software, not at the name you mounted the hive under.
The utility I am looking for would (partially?) hide the registry of WinPE/RE in favour of the one present in the offline install. The expected effect is that if you launch the registry editor through the utility, you see the keys as if you had booted into that Windows (maybe with some exceptions?).
The application would still use the Microsoft services of the currently running Windows. I would also like to launch services installed on the offline Windows that are not installed on the current one, ideally even kernel ones. That would give the same behaviour as launching SysV daemons on Unix, except that some mechanism for avoiding duplicate instances might be necessary, because the problems would be more critical on Windows.
User access rights are an important part of Microsoft systems, so specifying a user name and password on the command line could be necessary. But some problems, like a bad user-database configuration, prevent Windows from booting at all: if syskey goes wrong, for instance, Windows can end up in endless reboots, and the authentication information cannot be used to re-enable it. One possibility would be to find a way to mount the user hives by providing their path instead of login information; or, if that is impossible, to keep the user keys/information of the currently booted Windows.
I don't know if a utility like this exists. I'd like help programming it with MinGW from Linux (I can't get Visual Studio). It would be good if it didn't need to be installed, and I would prefer it not to use .NET or the full Windows API, so that it works under WinRE. I write C/C++ under Linux, but I have never done so for Windows. The only experience I have is having managed to build 7-Zip with winebuilder. I just know that the entry point is called "main" for console programs and "WinMain" for windowed ones. I am not familiar with the Win32 API or the NT API; I just know there is no real equivalent of the Unix chroot().
I hope this is possible, thanks in advance.
The answer, after some review, is No. You can't do that. There are too many embedded references to HKEY_LOCAL_MACHINE in the various system DLLs; at the very best, you would end up with a very buggy system (since different parts of the system would be seeing different views of the machine configuration.)
Many questions on SO say that the "Windows developer guidelines" or "Windows design guidelines" state you shouldn't write temporary or program data to the Program Files area, but as far as I can tell none of them actually link to a piece of documentation that says as much. Searching MSDN has yielded no results. Windows makes the area read-only, so the rule is enforced by the OS, but that doesn't mean developers didn't try to write there anyway (e.g., when porting older, XP-and-earlier programs forward).
I realize it seems odd to ask about this so late into Windows development (since, as a commenter below pointed out, it has been enforced by the OS for more than a decade), but a document that says so is sometimes necessary to satisfy people.
With that in mind: does Microsoft have a published document stating that we shouldn't write application data to the Program Files area, and if so, where is it?
From Technical requirements for the Windows 7 Client Software Logo Program:
Install to the correct folders by default
Users should have a consistent and secure experience with the default installation location of files, while maintaining the option to install an application to the location they choose. It is also necessary to store application data in the correct location to allow several people to use the same computer without corrupting or overwriting each other's data and settings.
Windows provides specific locations in the file system to store programs and software components, shared application data, and application data specific to a user:
- Applications should be installed to the Program Files folder by default. User data or application data must never be stored in this location because of the security permissions configured for this folder (emphasis added)
- All application data that must be shared among users on the computer should be stored within ProgramData
- All application data exclusive to a specific user and not to be shared with other users of the computer must be stored in Users\<username>\AppData
- Never write directly to the "Windows" directory and/or subdirectories. Use the correct methods for installing files, such as fonts or drivers
- In “per-machine” installations, user data must be written at first run and not during the installation. This is because there is no correct user location to store data at time of installation. Attempts by an application to modify default association behaviors at a machine level after installation will be unsuccessful. Instead, defaults must be claimed on a per-user level, which prevents multiple users from overwriting each other's defaults.
And I'm quite sure that there's similar stuff for every Windows version of the NT family going back to Windows NT 4 or even earlier.
See also this question.
Edit: the original link in this post to the Windows 7 Logo Program no longer exists. Here is the current link to the Certification requirements for Windows Desktop Apps; see Section 10, "Apps must install to the correct folders by default".
In later versions of Windows (Vista, 7, and of course the server versions), access permissions are restricted for "special folders", including "Program Files". Even if your program is elevated and has sufficient privileges to write to this folder, it is still a bad idea.
I don't know of any guidelines that state this outright, but there is a list of special folders and what they are meant for. The fact that there is a special folder for nearly every type of data I can imagine means there is no need to use the Program Files folder.
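A quick way to see where those special folders live on a given machine is to echo the corresponding environment variables from a command prompt (a sketch; the actual paths vary by machine and user):

rem Per-user application data (roaming and machine-local):
echo %APPDATA%
echo %LOCALAPPDATA%
rem Application data shared among all users on the machine:
echo %PROGRAMDATA%
rem Install location; read-only at runtime for standard users:
echo %PROGRAMFILES%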
Is there a way I can launch an RDP session to a remote Windows server and perform a file transfer to the local computer? The version of Windows Server on the remote machines varies, ranging anywhere from 2000 to 2008.
I've tried to look up solutions, and the advice seems scattered everywhere. Some suggest using mstsc.exe; others suggest PowerShell, Java, or ASP.NET. I'm confused. I'd appreciate some guidance here.
Thanks!
Update Below: 17 Feb 2012
Thanks for all the suggestions. I'd like to add that the remote servers are securely locked down: I'm not allowed to install SSH servers, FTP servers, or shared drives. The only way to access the remote machines is through RDP, and they sit on separate VLANs that only authorised users can reach. I'm trying to create a script that helps authorised users download the required files.
You can map a drive using remote desktop.
Options > Local Resources > More
Ctrl + C at the Remote Desktop and Ctrl + V locally, if you're not looking for an automated solution. (Check the RD configuration to make sure copy and paste is enabled.)
Once you have mapped the drives you want using mstsc, you can use \\tsclient to access the file system of the local machine, i.e. the Terminal Services client from which you RDP'ed onto the remote box.
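For example, from a command prompt inside the remote session (hypothetical paths; assumes the local C: drive was shared under Local Resources):

rem Pull a file from the local machine into the remote session:
copy \\tsclient\C\Users\me\upload.zip C:\Temp\
rem Push a file back to the local machine:
copy C:\Logs\server.log \\tsclient\C\Users\me\Downloads\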
If all you are trying to do is copy files from a remote box, just use \\machine\c$\path etc., or share the folder and use \\machine\share to get them. RDP is not necessary in this case.
Once you have mapped the needed drives as Andy says, you can execute a LOCAL batch file remotely every time you connect, by specifying its local path (using \\tsclient\c to refer to your local drive) in the Programs tab of the RDP properties.
Remember to write cmd /c before that path.
The RDP connection will automatically close once the batch file ends, but you can add a pause command at the end to see what happened during execution.
Connecting this way, you can edit the batch file before connecting.
Make sure the remote machine has PSRemoting enabled by running the following command in PowerShell:
Enable-PSRemoting -Force
From the client computer, run the following command to establish the connection:
net use "\\{RemoteIP}\c$" "{Password}" /USER:"{Username}" /persistent:no
After that you can use Copy-Item and Remove-Item over the network:
Copy-Item [PACKAGEPATH]\* \\[COMPUTER]\c$\installers -recurse
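Putting it together, a hedged sketch with made-up names, addresses, and credentials:

# Map the administrative share, copy files over, then drop the mapping.
# (192.0.2.10, CONTOSO\deploy, and the paths are all hypothetical.)
net use "\\192.0.2.10\c$" "P@ssw0rd" /USER:"CONTOSO\deploy" /persistent:no
Copy-Item 'C:\installers\*' '\\192.0.2.10\c$\installers' -Recurse
Remove-Item '\\192.0.2.10\c$\installers\old.msi'
net use "\\192.0.2.10\c$" /delete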
On the client machine: Run -> mstsc.exe -> Local Resources -> enable the clipboard.
On the remote machine, open the Run dialog (Windows Key + R) and start cmd, then run:
Taskkill.exe /im rdpclip.exe
Once that reports "SUCCESS", run rdpclip.exe again from the same command prompt.
Copy and paste should now work in both directions.
You can copy and paste files over RDP, it works perfectly. See http://www.reddit.com/r/sysadmin/comments/1d6a1o/til_you_can_copy_and_paste_files_over_rdp/ for more info.
eug wrote what I think is an extremely useful comment that seems to have been overlooked by everyone:
You can very easily share a single folder by using subst to map it to a drive letter, and then selecting that drive in remote desktop.
Note that it's fairly easy to run into problems with this method, because subst performs the mapping only for the user under which it is run.
So I recommend running everything from a single command prompt (a concrete sketch follows these steps):
Open a command prompt (Win+R -> cmd)
Type subst <lettertomap>: <pathtofolder>
Type mstsc (which launches Remote Desktop)
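For example (the drive letter and folder are hypothetical):

rem Map a single folder to a virtual drive letter, then launch Remote
rem Desktop and tick drive X: under Local Resources > More.
subst X: C:\Users\me\FilesToShare
mstsc
rem When you are done, remove the mapping:
subst X: /D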
Keep in mind that the subst mappings are not persistent across reboots, of course, so this is mostly convenient for a one-time session of file transfer.
There are also other ways to do the mapping; see raymond.cc.
And yes, the mapping does seem to disallow access to the rest of the drive, although I wouldn't bet my life that it doesn't have chroot-like "vulnerabilities" (assuming it is supposed to be secure in the first place).
1) Install Dropbox or an equivalent cloud-storage product and sync the needed files that way between computers. Remember, you can allow only certain folders to be synced on specific devices (you don't have to sync the entire Dropbox, just the folders you need).
2) If you are allowed to set up more than one user on the remote server, add a second user and then have user2's session connect an RDP session to user1. This keeps user1's GUI alive in the cloud without having to remain logged in to RDP locally.
This video should show you how to implement the two-user setup on your server to hold an RDP session open. Note that this does 'permanently' use one RDP session until you decide to close it.
[markdown cannot embed video :( ]
Then use AnyDesk on user1's desktop to connect and manipulate the desktop. This includes using AnyDesk's file manager to browse any folder you need and copy from it. AnyDesk can be free if you connect via a direct IP connection; most VPS servers have a dedicated IP address or subdomain, so this should not be a problem. It's a good idea to password-protect your AnyDesk login and control which IDs have access to unattended remote connections. The AnyDesk file manager is a bit crude, but it works. Their big thing is simplicity and speed.
Note: use portable mode only on the remote user's desktop; do NOT fully install AnyDesk. Also, CPU usage increases to stream the desktop screen, roughly in line with the size of the RDP window. I am using a 1280 x 2048 window with 4 cores, and CPU usage is 22-25% when idle or moving things around. This might decrease with more video RAM or a graphics processor on the target server. But if you only "browse files" (use the file manager without streaming the desktop), CPU usage stays under about 0.3% idle and around 1% on average when transferring files (bursting to 5-6% when a file finishes uploading and the pieces are being finalized).
You'd have to write your own scripts (Java, .NET, C#/C++, AutoIt, etc.) to launch AnyDesk locally and automate connecting and downloading specific files.
This strategy is a bit more complex, but it should do the job. I'm not sure why Microsoft's RDP can't include some simple, quick file manager like the one AnyDesk has; oh well.
Addendum: you can also use TeamViewer. TeamViewer became a lot more restrictive about what counts as "non-commercial use", while AnyDesk is secure, has a much smaller footprint, and, if you can make a direct connection, doesn't seem to care too much about usage. If you do need a license, it will be much lighter on the wallet.
AnyDesk works flawlessly without any installation required. In fact, when used in a server environment as I described above, no installation is recommended.
Edit: AmmyyAdmin has not been recommended for several months now due to some security and technical concerns. I added AutoIt above as a scripting option for automating interaction with the GUI and nearly any Windows function.
I have content on a portable HDD that is to be shared between 2 or more computers, but none of the computers are connected to a domain (none exists). I want to give permissions to the content in such a way that the permissions remain the same across all my computers, irrespective of which computer I connect the HDD to and irrespective of which user account was used to set the permissions.
For example, I want the built-in Administrators group (SID: S-1-5-32-544) to have Full Control of a file on the portable HDD, irrespective of the computer it is connected to (I am aware this constitutes a big security hole, but so long as the drive doesn't get stolen, I am ok with it. Anyway, once an attacker has physical access to a drive, all bets are off.).
The problem I am trying to solve is this: I connect the HDD to computer1, set all the permissions, and disconnect. Then I connect the HDD to computer2, and suddenly the permissions aren't right for the user on that computer, since the SIDs are different (both for the permissions and for ownership of the content).
If you want the Administrators group to have full control, just set it that way. In Windows XP Pro or some other system that gives you a Security tab in Properties, use it. In the drive's security properties, add Administrators (if it's not already there), and in the privileges for Administrators give full control and enable all inheritance. You just have to set that on one machine and then other NT-based Windows PCs will obey the settings.
If you can't find one Pro system to use for that setting, then you'll have to learn the cacls command line. Fortunately you still just have to do it once. Oops. You'll have to do it n times where the first (n-1) times are various mistakes, but you just have to get it right once.
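On more recent systems, cacls has been superseded by icacls, which can grant directly to a well-known SID so the entry means the same thing on every machine. A minimal sketch, assuming the drive letter is X::

rem Grant the built-in Administrators group (well-known SID S-1-5-32-544)
rem full control over the drive, inherited by all files and subfolders.
icacls X:\ /grant *S-1-5-32-544:(OI)(CI)F /T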
The permission scheme you choose for your HDD depends on the filesystem you've formatted the drive with. Different filesystems specify permissions differently and have to be treated separately.
Why are you using permissions at all? If someone gets the drive, they have access. Instead, just use something like TrueCrypt to protect everything, and give everyone permissions to everything inside the TrueCrypt volume.