I would like to secure a folder so that no one can cut or copy any file, or the contents of any file, without a "secure" password (or I am happy to drop the password part entirely, so that no one can cut, copy, or move any file or file contents out of the folder). Also, if all files and folders inside my root folder could be deleted after a certain number of days, that would be great. This is to stop people from copying and distributing my files to others without my permission, and to have the folder contents "expire" after a certain number of days (e.g. 7 days).
Currently, I manually copy the folder to other people's machines, so I do have physical access to their machines.
PS. I am happy to write a script as well, in case there is a way to execute a script every time I open the folder.
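For the expiry part, something along these lines, run from a scheduled task, is what I have in mind (a minimal sketch; C:\SecureFolder is a placeholder path):

```powershell
# Delete anything in the folder older than 7 days.
$root = 'C:\SecureFolder'
Get-ChildItem -Path $root -Recurse -File |
    Where-Object { $_.CreationTime -lt (Get-Date).AddDays(-7) } |
    Remove-Item -Force
```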
I understand I can't stop people from stealing file contents by manually typing them into another file or taking photos of them, but I want to make it harder for them.
This is not a PowerShell issue, nor a solution provided by PowerShell. This is a data risk management issue as well as a reality check.
Don't get me wrong, you can write a script that encrypts data,
https://blogs.technet.microsoft.com/heyscriptingguy/2015/03/06/powertip-encrypt-files-with-powershell
Or just use EFS, but each of those has several limitations.
https://technet.microsoft.com/en-us/library/bb457116.aspx
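For example, a minimal EFS sketch (assuming an NTFS volume and a Windows edition that supports EFS; the paths are placeholders):

```powershell
# Encrypt everything in the folder (and new files created there) so that
# only the current Windows account can read the contents.
cipher.exe /e /s:"C:\SecureFolder"

# The same thing per file via .NET:
[System.IO.File]::Encrypt('C:\SecureFolder\notes.txt')
[System.IO.File]::Decrypt('C:\SecureFolder\notes.txt')
```

Note that this only controls who can read the files on that machine; an authorized user can still copy the plaintext out.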
Then there are password-encrypted zip files. But...
None of the above stop cut/copy/paste/print, and there is no way to make them do so.
Here is the simple truth to data security which I deliver at all my public speaking engagements and customer deployment engagements.
Nothing can defeat an ocular attack. Meaning...
'If I can see your data, I can take your data.'
It may take me longer than being able to just bulk exfiltrate your data (copy to a USB drive, CD, DVD, native print, etc.), but I can take a picture, photocopy it, screen-grab it from another device, or manually write it down.
Either method allows me to walk away with it and give it to whomever.
You can only mitigate / slow down / prevent bulk exfiltration using DLP/RMS protection solutions.
Why are you putting this manually on their systems, versus hosting it in the cloud where they can access it? If you do this in MS Azure, you can leverage Azure Information Protection.
RMS for individuals and Azure Information Protection
RMS for individuals is a free self-service subscription for users in
an organization who need to open files that have been protected by the
Azure Rights Management service from Azure Information Protection. If
these users cannot be authenticated by Azure Active Directory and
their organization does not have Active Directory Rights Management
(AD RMS), this free sign-up service can create an account in Azure
Active Directory for a user. As a result, these users can now
authenticate by using their company email address and then read the
protected files on computers or mobile devices.
https://learn.microsoft.com/en-us/information-protection/understand-explore/rms-for-individuals
Why are you not heavily watermarking your data?
Putting passwords on files and folders does not prevent that ocular attack.
Neither does DLP/RMS. You can apply cut/copy/paste/print policies, remove access after a certain date, restrict access as per the feature set using policies.
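If you go the AIP/RMS route, protection can be applied in bulk from PowerShell. A minimal sketch, assuming the AzureInformationProtection client module is installed and you are already signed in (the template GUID and folder path below are hypothetical placeholders):

```powershell
# Apply a hypothetical "Confidential - expires after 7 days" RMS template
# to every file in the folder. Replace the GUID with a real template ID.
$templateId = '00000000-0000-0000-0000-000000000000'
Get-ChildItem -Path 'C:\SecureFolder' -File -Recurse |
    ForEach-Object { Protect-RMSFile -File $_.FullName -TemplateID $templateId }
```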
Yet, again, this is just prevention against the bulk dumping / sharing of your data, not the fine-grained, patient, write-it-down or capture-from-a-remote-camera approach. Even if you block cut/copy/paste on the host, I can bring that host up in a screen-sharing session - think Remote Desktop - and take screenshots of the RDP session using the tools on the machine I connect from. Heck, I could create a webcast and share it with a group, meaning I open it on my system and let people view it with me.
No DLP solution is 100%. Anyone telling you this is lying.
As someone who has been doing Info/CyberSec for almost two decades and has evaluated, deployed, and used several DLP solutions, what I state here is from experience. DLP is important, and businesses must look to it as another mitigation in their risk strategies, but must do so with real vision and a sense of reality.
No matter who it is from, no technology can prevent this ocular avenue. If you don't want your data leaving your control, then don't share it. Yet, since you are in the education business, that is not an option.
I'll say it again, and again...
'If I can see your data, I can take your data.'
Related
I have an application (in a pure windows environment) that needs to store sensitive data so that other workstations with the same applications can access that data.
At the moment that's done using a central server with a SMB network share and encrypted files.
All (windows) users that use our application have to have read/write access to one central shared folder and this way data is stored and exchanged.
This configuration has one big drawback: Not only the application but also all users of our application have full access to that shared folder.
Ok, they can't read the sensitive data, as it is encrypted, but - given some criminal energy or stupidity - they can simply open Windows Explorer, navigate to that shared folder, and delete files there.
I tried but didn't manage to open the SMB share only for my application - as soon as my application authenticates there, the current Windows user also has access.
(I tried using WNetAddConnection2, but as soon as the authentication happened, the connection was open for all other programs as well. And if I don't map the SMB folder to a drive letter, I can't even disconnect the drive again.)
Are there possibilities to authenticate only a process or a thread and not the current user for access to a network share?
Or are there performant alternatives to SMB shares? One data record is somewhere between 100 and 900 MB in size, so I need support for random-access reading/writing of the files.
Using SFTP and pumping the entire record to the workstation when opening it and sending everything back when closing is not an option. That would stress the network, and if the application crashes, all changes are lost, whereas with "normal" access only the data in the network cache is lost.
Any recommendations?
Are there possibilities to authenticate only a process or a thread and not the current user for access to a network share?
No. Windows' security model is based on users, not applications. To apply rights on a per-process basis, you would have to run the application as a given user. To apply rights on a per-thread basis, you would have to impersonate a given user before doing the work, and then revert the impersonation when finished.
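To illustrate the per-process approach: launch the application under a dedicated service account, and grant only that account rights on the share. A minimal sketch (the account name and path are hypothetical):

```powershell
# Prompt for the service account's password, then start the app under that
# account; the SMB connections the app opens will use that account's token.
$cred = Get-Credential 'DOMAIN\svc_appshare'
Start-Process -FilePath 'C:\Program Files\MyApp\MyApp.exe' -Credential $cred
```

For the per-thread variant, the native sequence is LogonUser, ImpersonateLoggedOnUser, do the file work, then RevertToSelf.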
First of all, if this question should go into another stackexchange site please let me know.
I have a computer that I used for a lot of years, so it has a lot of stored passwords, cookies, etc. in my Google Chrome folder. I recently bought a new computer and wanted to keep everything that I had before, especially my cookies, extensions, etc.
At first, I just copied the %APPDATA%\local\google folder from my old computer over to my new one. When I launched Chrome I could see my history, extensions, etc., but when I went to common sites like Facebook, Gmail, etc. it was asking me to log in.
I then read about how Chrome encrypts that data with DPAPI, so I changed my password and username on my new computer to match my old one and copied the folder over again, but still nothing.
So, I read some more and discovered that DPAPI uses a master key file, so I went ahead and copied the %APPDATA%\roaming\microsoft folder, which should contain that file, over to my new PC. So now I have the same password, username, and master key file, but I still can't get it to work. It asks me to log in every time instead of using the cookies/saved passwords.
Does anyone know what else I am missing to have Chrome be able to decrypt those things when I go to a website?
Again, if there is another site that would fit this better, please let me know. Thank you.
As to the DPAPI aspect: the S-identifier (the user's internal "LSA name" within the Windows OS, where LSA = Local Security Authority subsystem; it is also the name of the folder that the master key files reside in under the Protect directory) is used, together with the user password, in deriving the key that decrypts the master key files. So these cannot be used on any other computer (the majority of the S-identifier is randomly generated when the user is created on the PC and, I believe, cannot be set manually). Using open source tools one could in theory re-encrypt the master keys of the old PC to make them valid on the new PC, but frankly that's a PITA. And you'd still have to mess a bit with the most recent master key files, etc. Even then there's no 100% guarantee.
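To see this on your own machine, a minimal sketch that prints the current SID and lists the matching master key folder (the files are hidden, hence -Force):

```powershell
# The folder under Protect is named after the user's SID; the master key
# files inside it are hidden system files.
$sid = [System.Security.Principal.WindowsIdentity]::GetCurrent().User.Value
$sid
Get-ChildItem -Path "$env:APPDATA\Microsoft\Protect\$sid" -Force
```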
Within Chrome itself, when you have it open on the old PC, you can export all the password info in some structured format, transfer it to the new PC (say by USB), and import it into Chrome there. You could also turn on Chrome syncing (which requires a Google login) and let "the cloud" handle the transfer (password syncing is optional there, and you can choose encryption with your Google credentials as an extra security option; I don't know the internals of that mechanism). Most password managers also offer a way to sync passwords between browsers on different computers, especially if they already offer browser integration. The export-import option seems the most practical to me.
I want to send info between a desktop/laptop/tablet app and Windows Phone. One possibility is to send data to the SkyDrive account and have the other end pick it up from there. Is this feasible? What I have in mind is the "Windows 8" app running on the desktop, laptop, or tablet allowing the Windows Phone app[s] to send data to its account. Is this possible, such as by providing the Windows Phone app with the SkyDrive login info, or...?
From all the other questions you've posted around this query, it sounds like you want to put a mechanism in place to communicate between a Windows 8 app and a Windows Phone app. I would recommend you look at building a service to handle the communication instead of trying to leverage mechanisms that weren't designed for what you want to achieve.
In direct answer to this question, though, you can probably achieve it in this manner, but what happens if the user deletes the file you create?
So, SkyDrive is unique to a user, not a device. This means if your application is running on more than one device you can use SkyDrive as a shared, unified storage option. Not just for files but also for application settings. There's an SDK for every platform, not just MS.
Here's what you need to consider.
The roaming API in Windows 8 puts information in a protected area of SkyDrive. As a result, the user cannot delete or screw up the files stored there. Using SkyDrive as a shared location (like you are asking), by contrast, doesn't have this benefit. The user can screw with your files or delete them - and wreck your app. There is no such thing as protecting your app files in SkyDrive (at this time).
Specifically, to your question:
The authorization model for SkyDrive requires a token that cannot practically be cached for any app. Also, you cannot cache credentials, because you never get the credentials in the first place - you only get the resulting token. Listen, you would violate every possible best practice if you asked the user for their username and password and stored them. Please do not do this.
The final answer is this: an app on multiple devices can use SkyDrive as a shared storage solution for files and settings (like XML files) - but the developer needs to understand the risk and mitigate it (mitigation might be easy for your app). The user, on every device, would need to sign in and grant each application access to its folders. And, that's it.
I've written an application and I'd like to add a registration key/serial number to it (I'm big on minimum inconvenience - a la #4 in this Eric Sink article). My question is about where to store the "activation" once the application has been registered. As I understand it, I have a trade-off between storing the key in a public place, where all users can read it (but which requires admin rights to save there), and storing a per-user activation (but then each user on the computer has to activate independently). That gives me two choices:
Some user, with local admin rights, activates the product. The activation is stored in HKLM, in the program files folder, or somewhere else where all users can read it, and the product is activated for all users.
A user (with or without admin rights) activates the product. The activation is stored somewhere user-centric (per-user app.config, HKCU, etc). The plus is that the user doesn't have to be an admin. The downside is that if there are 6 users who use the computer, each has to activate the product. They can each re-use the same serial, but they still have to enter it.
Is this really the trade-off? If it is, what have others done? As a developer, I'm used to people being a local admin, but in the real-world, I don't expect many of my corporate users to be local admins, which makes me lean towards option 2. Are computers not shared often enough that I shouldn't be concerned?
Again, I'm not asking about how to physically register a computer - I'm not worried about it. I'm only going to checksum the key provided and give the go-ahead, as I want to be as non-invasive as possible.
I would recommend a solution that does not require admin rights. Lots of users, especially in shared environments, won't have those rights and won't be able to find anyone with them conveniently.
Also, going forward a few years, I think it will be getting increasingly unusual to have admin rights on the computer you are using, as the security situation improves.
The registry seems to be an okay solution for business software. At least where I used to work, regular users were not local computer administrators, so each installation required a local administrator account. This is a good thing, since it lessens the headache for your support staff from people installing just about everything in your business computing environment. The trade-off is, of course, that users will be pissed that they can't install stuff, or have to contact support to do it, but hey... :)
Other stuff:
- USB / other type of dongle (a la old 3DMax)
- plain old text file (a la Garmin GPS software on mobile devices)
- encode / rewrite the key into your binary or part of your binary (did this trick back in the old DOS days)
- store them in your own db via the web (a la EverQuest / other MMORPG games)
- local key db (a la MATLAB, I think)
How about using isolated storage for your application?
You would have the ability to store the registration information at the machine level, and configuration changes can be persisted at the user level.
We save our activation code to the registry for the current user (HKCU) and have had very few problems with it. Our customers run on everything from home computers to thin clients on corporate networks.
If your software will be used in schools or other educational environments, you need to provide some other method. It could be as simple as a separate registration application which saves the activation for all users. Your software would then have to do two registry lookups, but that is a small price to pay.
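A minimal sketch of that two-lookup pattern (the key path and value name are hypothetical):

```powershell
# Check the per-user hive first, then fall back to the machine-wide hive.
$keyPath = 'Software\MyCompany\MyApp'
$activation = (Get-ItemProperty "HKCU:\$keyPath" -Name Activation -ErrorAction SilentlyContinue).Activation
if (-not $activation) {
    $activation = (Get-ItemProperty "HKLM:\$keyPath" -Name Activation -ErrorAction SilentlyContinue).Activation
}
```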
In general, most computers are used by a single user (or multiple people still using the same user account). So a user based storage will work most of the time anyway.
However it's not either/or. There are folder locations that are writable by all users - such as the ProgramData folder. The key is to make the file readable/writable by Everyone so that you can verify the content regardless of the user.
DeployLX Licensing does this for non-secure license data so that it can be used by multiple users without an admin explicitly granting permission.
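A minimal sketch of that ProgramData approach (the directory and file names are hypothetical):

```powershell
# Create the license file under ProgramData and grant the built-in Everyone
# group modify rights so any user can read and update it.
$dir  = Join-Path $env:ProgramData 'MyApp'
$file = Join-Path $dir 'license.dat'
New-Item -ItemType Directory -Path $dir -Force | Out-Null
Set-Content -Path $file -Value 'ACTIVATION-KEY-GOES-HERE'

$acl  = Get-Acl -Path $file
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule('Everyone', 'Modify', 'Allow')
$acl.AddAccessRule($rule)
Set-Acl -Path $file -AclObject $acl
```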
You should be consistent. If administrator rights were required to install the program, it's not out of line to require administrator rights to register it. Likewise, if you somehow managed to install it without administrator rights, then register it without them too.
If you install and register in one step this won't be an issue.
My app needs to upload a CSV file and convert it to Google Sheets, so we are asking for the "https://www.googleapis.com/auth/drive" permission from our users. But some of our users complain that we are asking for too many permissions. Are there any other settings we can use to avoid asking for too much?
Here is the permission list shown when the user authorizes:
Upload, download, update, and delete files in your Google Drive
Create, access, update, and delete native Google documents in your Google Drive
Manage files and documents in your Google Drive (e.g., search, organize, and modify permissions and other metadata, such as title)
What scope or scopes does my app need?
As a general rule, choose the most restrictive scope possible, and avoid requesting scopes that your app does not actually need. Users more readily grant access to limited, clearly described scopes. Conversely, users may hesitate to grant broad access to their files unless they truly trust your app and understand why it needs the information.
The scope https://www.googleapis.com/auth/drive.file strikes this balance in a practical way. Presumably, users only open or create a file with an app that they trust, for reasons they understand.
https://www.googleapis.com/auth/drive.file Per-file access to files created or opened by the app
Requesting full drive scope for an app
Full access to all files in the user's Drive (https://www.googleapis.com/auth/drive) may be necessary for some apps. An app designed to sync files, for instance, needs this level of access to Drive. Apps with special needs related to listing or reorganizing files might need full scope.
Requesting drive-wide read-only scope for an app
Read-only access to all of a user's Drive files (https://www.googleapis.com/auth/drive.readonly) may be useful for certain apps. For instance, a photo browser might need to reorganize image files in a unique presentation order for a slideshow, or a mobile app might have to work around unique display constraints without needing to write anything. For apps that only need to read file metadata for all files in Drive, there's https://www.googleapis.com/auth/drive.metadata.readonly.
Requesting full drive scope during app development
One common and completely valid case for using full scope is iterative development. It may just be easier to avoid authorization-related constraints and use the full scope while testing your app during development. Then before you actually publish your app, you can back off to the file-level scope or whatever scope you really need for production operation.
Conclusion
That text was ripped directly from the Google Drive scopes page, which I use as a rule of thumb when developing Drive applications. In your case, because you need to be able to upload files, I would say you should consider testing a little with the https://www.googleapis.com/auth/drive.file scope. I haven't tried this one before, but it sounds like it may work in your instance. Unfortunately, I think that is your only other option besides full Drive access.
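If you do try the narrower scope, the only change on your side is the scope string in the consent request. A minimal sketch of building the OAuth 2.0 authorization URL (the client ID and redirect URI are hypothetical placeholders):

```powershell
# Build the Google OAuth 2.0 consent URL with the narrower drive.file scope.
$clientId    = '1234567890-abc.apps.googleusercontent.com'   # placeholder
$redirectUri = 'https://myapp.example.com/oauth2callback'    # placeholder
$scope       = 'https://www.googleapis.com/auth/drive.file'

$authUrl = 'https://accounts.google.com/o/oauth2/v2/auth' +
    '?response_type=code' +
    "&client_id=$clientId" +
    "&redirect_uri=$([uri]::EscapeDataString($redirectUri))" +
    "&scope=$([uri]::EscapeDataString($scope))"
$authUrl
```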