While setting up TFS for another user, I temporarily entered my own credentials to connect to the server. Now it still refers to my user name for the working folder. I even logged in on his machine and removed the workspace reference, but he still gets errors saying that I have already mapped the folder to my account for that project. How can I remove that mapping?
You can use TFS Sidekicks from Attrice to delete the workspace.
http://www.attrice.info/cm/tfs/index.htm
Log on as yourself and delete the workspace. Or, have him create a new workspace mapped to a new directory. Workspaces are very flexible.
In my Jenkins home dir I see these configs for all my users:
"c:\Program Files (x86)\Jenkins\users\someuser\config.xml"
What is this config? It looks like it's caching their sessions or something. Do I need to back up these files? What would the impact be if the users folder got deleted?
I should add that I'm using Active Directory for authentication, so these aren't internal Jenkins users; they are AD users. That's why I'm wondering what configuration Jenkins is keeping for them.
You should indeed back those up. These folders contain user-specific configuration data, such as public keys or API access tokens (see the user's "Configure" menu in the web interface).
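If you just want a quick way to snapshot that folder, here is a minimal sketch. The paths are stand-ins for the demo (point JENKINS_HOME at your real installation, e.g. c:\Program Files (x86)\Jenkins):

```shell
# Demo: back up the Jenkins per-user config folder with tar.
# JENKINS_HOME and the sample user here are stand-ins so the snippet
# is self-contained; substitute your real install directory.
JENKINS_HOME="${JENKINS_HOME:-/tmp/jenkins_home}"
mkdir -p "$JENKINS_HOME/users/someuser"                      # stand-in data
echo '<user/>' > "$JENKINS_HOME/users/someuser/config.xml"   # stand-in config
tar -czf /tmp/jenkins-users-backup.tar.gz -C "$JENKINS_HOME" users
tar -tzf /tmp/jenkins-users-backup.tar.gz                    # list archive contents
```

Restoring is the reverse: extract the archive back into JENKINS_HOME and restart Jenkins.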
First, I'm not the user using this; I'm implementing it for a couple of users.
We use VDI machines with all user profiles on the server. I have managed to clone the Git repo and leave a copy on the server, which I use Robocopy to copy to the users.
This has worked great, but we are facing an issue when they want to change some settings: we get an error. The settings work fine if the config file points to the UNC path (\\domain.local\share\users\username), but if it points to the drive letter of the share (t:\users\username) or to the C drive (c:\users\username), we get an error.
I'll look for the errors and upload them.
Cheers
Isaac
Recently, our infrastructure team changed my TFS server from physical to virtual and, at the same time, moved all users from one domain to another. Earlier I connected to TFS and mapped to a local drive as domain1\user1; now I am domain2\user1.
I am able to connect to TFS; however, I am not able to map the same local folder that I mapped earlier.
Because of this, I am unable to check in my existing changes, and the mapping does not work.
What can be done in this situation to get the same local path mapped, along with the existing checkouts made by the previous user?
Try running the tf workspaces command to update the user name:
tf workspaces [/collection:TeamProjectCollectionUrl] [/updateUserName:oldUserName]
For example (replace the URL with your own collection's URL):
tf workspaces /collection:http://yourserver:8080/tfs/DefaultCollection /updateUserName:domain1\user1
The /updateUserName option updates the security identification information on the Team Foundation server for a user whose network user name has changed.
I am currently building a C# WebApi 2 application that I will be uploading to an Amazon Elastic Beanstalk instance to deploy. I am having success so far, and on my local machine, I just finished testing the file upload capability in order for clients to upload images.
The way it works is that I accept the multipart/form-data in the Web API and save a temp file (with a random name like BodyPart_24e246c7-a92a-4a3d-84ef-c1651416e667) to the App_Data folder. The temporary file is then put into an S3 bucket, and I create a reference to it in my SQL Server database.
Testing works fine with single or multiple file uploads locally, but when I deploy the application to Elastic Beanstalk and try to upload, I get errors like "Could not find a part of the path 'C:\inetpub\wwwroot\sbeAPI_deploy\App_Data\BodyPart_8f552d48-ed9b-4ec2-9986-88cbffd673ee'", or a similar one saying access is denied altogether.
I have been trying to find the solution online for a few hours now, but the AWS documentation is all over the place and tutorials/other questions seem to be outdated. I believe it has something to do with not having permission to write the temporary files on the EC2 server, but I can't figure out how to fix it.
Thanks very much in advance.
This has been possible since April 2013. Basically, the steps you need to perform are the following:
Create a folder called .ebextensions in the top-level of your project through the solution explorer
Add your configuration file to this folder, e.g. myapp.config (replace myapp with your Elastic Beanstalk app's name)
Add the code shown underneath to the configuration file you just created. Replace MyApp with your project name (not the solution name) displayed in Visual Studio
All set! Be sure there's a file inside App_Data, otherwise Visual Studio won't publish the folder.
{
  "container_commands": {
    "01-changeperm": {
      "command": "icacls \"C:/inetpub/wwwroot/MyApp_deploy/App_Data\" /grant DefaultAppPool:(OI)(CI)"
    }
  }
}
To give write permissions to your DefaultAppPool, you can:
create an .ebextensions folder
create a config file and place it in your .ebextensions folder
This will change the permissions on your wwwroot folder:
container_commands:
  01-changeperm:
    command: 'icacls "C:\\inetpub\\wwwroot" /grant "IIS APPPOOL\DefaultAppPool:(OI)(CI)F"'
I had the same problem (unable to write to a file in the App_Data folder of my web application on Elastic Beanstalk).
In my case it was sufficient to create a dummy file in the App_Data folder in my Visual Studio project. When I did this, the App_Data folder was created during deployment with permissions that allow the web application to write to it.
No need for .ebextensions to change folder permissions.
The App_Data folder does not have write permissions by default, and you would have to set appropriate permissions during deployment of your apps.
Check out this post for a detailed explanation of how to do it: http://thedeveloperspace.com/granting-write-access-to-asp-net-apps-hosted-on-aws-beanstalk/
This question is pretty old, but for anyone else who ends up with the same issue: I had the same problem on AWS. Connect to your instance and change the properties of the folder you want to upload files to. Select the folder you want to grant read/write access to, click Properties, and set the permissions that way.
My issue was with uploading images to the server. I couldn't put them in the App_Data folder, since that is a special folder reserved for the app only, and I needed the images to be accessible through a URL. So I created another folder, "Uploads", published my API, and then connected to the instance through Remote Desktop. I located the Content folder and set its properties to read/write for DefaultAppPool. That solved my problem; hope this helps someone out there.
I need to create a temporary folder that will be accessed from the application only.
That means that even the current user and the system administrator are unable to open it from Explorer.
While the application is running, some files will be put into it. Once the application terminates, the folder and all its contents are deleted (again programmatically, with no way to delete them manually).
P.S. I found a few posts about this here, but no proper solution has been given yet.
Thanks in advance.
Windows security does not work that way. You cannot restrict access by application, only by user. If you want only your app to have access to a given resource, then you have to create a new user account, configure the resource to grant access only to that user, and then run your app under that user, or at least have your app impersonate that user when needed. Files and folders are securable objects, and Windows security is based on user accounts.
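A minimal sketch of the per-user approach, using POSIX permissions for illustration (on Windows you would grant the equivalent rights to a dedicated service account with icacls, e.g. icacls C:\MyAppTemp /inheritance:r /grant "MyAppSvc:(OI)(CI)F", where MyAppSvc is a hypothetical account you create for the app):

```shell
# Sketch: a folder accessible only to the account the app runs under.
# This restricts by *user*, not by application -- an administrator can
# still take ownership, which is why a truly app-only folder does not exist.
APPDIR=$(mktemp -d)          # private temp folder owned by the app's account
chmod 700 "$APPDIR"          # rwx------ : owner-only access
stat -c '%a' "$APPDIR"       # prints 700
```

Deleting the folder on exit is then a simple recursive remove performed by the same account before the process terminates.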