I'm trying to write an UnmatchedClassAndFilename diagnostic and code fix using the new Roslyn and Visual Studio APIs. The idea is to rename the class or the file when the two names don't match.
How can I use the Roslyn API to rename a file in Visual Studio? The Workspace class doesn't seem to support this.
Update: Created an issue at CodePlex (https://roslyn.codeplex.com/workitem/258)
No, there is currently no support for this in the Workspaces API. It's a common request, but I'm not sure we have anything explicitly tracking that work, so feel free to file the bug on CodePlex.
I am using Visual Studio 2017 and the Code Refactoring VSIX project template to accomplish this.
Here is my code:
private async Task<Solution> ConvertTypeNameToPascalCaseAsync(Document document, TypeDeclarationSyntax typeDecl, CancellationToken cancellationToken)
{
    // Produce a PascalCased version of the type declaration's identifier token.
    var identifierToken = typeDecl.Identifier;
    var newName = identifierToken.Text.ToPascalCase();

    // Get the symbol representing the type to be renamed.
    var semanticModel = await document.GetSemanticModelAsync(cancellationToken).ConfigureAwait(false);
    var typeSymbol = semanticModel.GetDeclaredSymbol(typeDecl, cancellationToken);

    // Produce a new solution that has all references to that type renamed, including the declaration.
    var originalSolution = document.Project.Solution;
    var optionSet = originalSolution.Workspace.Options;
    var newSolution = await Renamer.RenameSymbolAsync(originalSolution, typeSymbol, newName, optionSet, cancellationToken).ConfigureAwait(false);

    var newDocId = DocumentId.CreateNewId(document.Project.Id);
    var newText = await newSolution.GetDocument(document.Id).GetTextAsync(cancellationToken).ConfigureAwait(false);

    // Rename the document by adding a new document with the new name and removing the old document.
    newSolution = newSolution.AddAdditionalDocument(newDocId, newName + ".cs", newText);
    newSolution = newSolution.RemoveDocument(document.Id);

    // Return the new solution with the now PascalCased type name.
    return newSolution;
}
Note: ToPascalCase() is an extension method I added on string.
The major point to notice is that I used AddAdditionalDocument() and RemoveDocument() to effectively rename the existing document to match my new name.
Here is the code that sets up the Code Refactoring engine:
public sealed override async Task ComputeRefactoringsAsync(CodeRefactoringContext context)
{
    var root = await context.Document.GetSyntaxRootAsync(context.CancellationToken).ConfigureAwait(false);

    // Find the node at the selection.
    var node = root.FindNode(context.Span);

    // Only offer a refactoring if the selected node is a type declaration node.
    var typeDecl = node as TypeDeclarationSyntax;
    if (typeDecl == null)
    {
        return;
    }

    if (typeDecl.Identifier.Text.IsUpper())
    {
        // For an all-caps type declaration, create a code action to PascalCase the identifier text.
        var action = CodeAction.Create("Convert type name to PascalCase", c => ConvertTypeNameToPascalCaseAsync(context.Document, typeDecl, c));

        // Register this code action.
        context.RegisterRefactoring(action);
    }
}
Note: IsUpper() is also an extension method I added on string (both helpers are sketched after the examples below).
Incidentally, my specific use case is to convert all-caps class names with underscores in them to PascalCased class names. Examples:
TEST = Test
TEST_CLASS = TestClass
TEST_A_CLASS = TestAClass
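For reference, since neither helper is shown above, here is a minimal sketch of both that matches these examples (my own implementation, not from the original refactoring):

using System;
using System.Linq;

public static class StringExtensions
{
    // True when the string contains no lowercase letters (e.g. "TEST_CLASS").
    public static bool IsUpper(this string value) =>
        !string.IsNullOrEmpty(value) && !value.Any(char.IsLower);

    // "TEST_A_CLASS" -> "TestAClass": split on underscores and capitalize each part.
    public static string ToPascalCase(this string value) =>
        string.Concat(value
            .Split(new[] { '_' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(part => char.ToUpperInvariant(part[0]) + part.Substring(1).ToLowerInvariant()));
}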
I am upgrading a .NET 4.5 app to .NET Core 3.1, and I have a piece of code there like the one below.
private void GetContainerDirectories(IEnumerable<IListBlobItem> blobList)
{
    // First list all the actual FILES within
    // the current blob list. No recursion needed:
    foreach (var item in blobList.Where(blobItem => blobItem is CloudBlockBlob))
    {
        var blobFile = item as CloudBlockBlob;
        sb.Add(new Tree { Name = blobFile.Name, Id = blobFile.Name, ParentId = blobFile.Parent.Prefix, Title = Path.GetFileName(blobFile.Name), IsDirectory = false });
    }

    // List all additional subdirectories
    // in the current directory, and call recursively:
    foreach (var item in blobList.Where(blobItem => blobItem is CloudBlobDirectory))
    {
        var directory = item as CloudBlobDirectory;
        sb.Add(new Tree { Name = directory.Prefix, Id = directory.Prefix, ParentId = directory.Parent.Prefix, Title = new DirectoryInfo(directory.Prefix).Name, IsDirectory = true });

        // Call this method recursively to retrieve subdirectories within the current:
        GetContainerDirectories(directory.ListBlobs()); // <-- Here I am getting the error
    }
}
In the last line [ GetContainerDirectories(directory.ListBlobs()) ], I am getting an error for ListBlobs, and I am not able to find any useful solution for it. The error is:
'CloudBlobDirectory' does not contain a definition for 'ListBlobs' and no accessible extension method 'ListBlobs' accepting a first argument of type 'CloudBlobDirectory' could be found (are you missing a using directive or an assembly reference?)
Does anyone have any idea how to fix this? Many thanks in advance :)
The WindowsAzure.Storage SDK you are using is too old: .NET Core does not support the synchronous methods in that SDK, and ListBlobs is a synchronous method.
I suggest you use the latest SDK instead:
https://www.nuget.org/packages/Azure.Storage.Blobs/12.8.0
If you don't want to use the Azure.Storage.Blobs SDK, you can use the ListBlobsSegmentedAsync method under the WindowsAzure.Storage SDK.
Update:
You can use the code below instead of your original code:
var blobs = directory.ListBlobsSegmentedAsync(false, BlobListingDetails.Metadata, 100, null, null, null).Result.Results;
GetContainerDirectories(blobs);
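Note that .Result blocks the calling thread, and a single segment returns at most one page of results. If you want to stay fully asynchronous and list everything, a sketch along these lines should work (type and method names are from the WindowsAzure.Storage API; the page size of 100 is arbitrary):

private async Task GetContainerDirectoriesAsync(CloudBlobDirectory directory)
{
    BlobContinuationToken continuationToken = null;
    var results = new List<IListBlobItem>();
    do
    {
        // Fetch one page at a time until the listing is exhausted.
        var segment = await directory.ListBlobsSegmentedAsync(
            false, BlobListingDetails.Metadata, 100, continuationToken, null, null);
        results.AddRange(segment.Results);
        continuationToken = segment.ContinuationToken;
    } while (continuationToken != null);

    GetContainerDirectories(results);
}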
I'm trying to work with text files in the apps folder.
Here's my GoogleApiClient constructor:
googleApiClient = new GoogleApiClient.Builder(this)
    .AddApi(DriveClass.API)
    .AddScope(DriveClass.ScopeFile)
    .AddScope(DriveClass.ScopeAppfolder)
    .UseDefaultAccount()
    .AddConnectionCallbacks(this)
    .EnableAutoManage(this, this)
    .Build();
I'm connecting with:
googleApiClient.Connect()
and once OnConnected() fires, I need to list all files inside the app folder. Here's what I have so far:
IDriveFolder appFolder = DriveClass.DriveApi.GetAppFolder(googleApiClient);
IDriveApiMetadataBufferResult result = await appFolder.ListChildrenAsync(googleApiClient);
This gives me the files' metadata. But after that, I don't know how to read them, edit them, or save new files. They are text files created with my app's previous (native) version.
I'm following the Google docs for Drive, but the Xamarin API is a lot different and has no docs or examples. Here's the API I'm using: https://components.xamarin.com/view/googleplayservices-drive
Edit:
Here is an example to read file contents from the guide:
DriveFile file = ...
file.open(mGoogleApiClient, DriveFile.MODE_READ_ONLY, null)
    .setResultCallback(contentsOpenedCallback);
First, I can't find anywhere in the guide what "DriveFile file = ..." means. How do I get this instance? DriveFile seems to be a static class in this API.
I tried:
IDriveFile file = DriveClass.DriveApi.GetFile(googleApiClient, metadata.DriveId);
This has two problems: first, it complains that GetFile is deprecated but doesn't say how to do it properly; second, the file doesn't have an "open" method.
Any help is appreciated.
The Xamarin binding library wraps the Java Drive library (https://developers.google.com/drive/), so all the guides/examples for the Android-based Drive API work if you keep in mind the binding's Java-to-C# transformations:
get/set methods -> properties
fields -> properties
listeners -> events
static nested class -> nested class
inner class -> nested class with an instance constructor
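For example, here are two lines from the code below next to their Java counterparts (illustrative only):

// Java: String title = metadata.getTitle();           (get method)
var title = driveItem.Title;                           // -> property

// Java: DriveFolder folder = driveId.asDriveFolder();
var folder = driveItem.DriveId.AsDriveFolder();        // -> PascalCased method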
So you can list the AppFolder's directories and files by recursing through the Metadata whenever the drive item is a folder.
Get Directory/File Tree Example:
await Task.Run(async () =>
{
    async Task GetFolderMetaDataAsync(IDriveFolder folder, int depth)
    {
        var folderMetaData = await folder.ListChildrenAsync(_googleApiClient);
        foreach (var driveItem in folderMetaData.MetadataBuffer)
        {
            Log.Debug(TAG, $"{(driveItem.IsFolder ? "(D)" : "(F)")}:{"".PadLeft(depth, '.')}{driveItem.Title}");
            if (driveItem.IsFolder)
                await GetFolderMetaDataAsync(driveItem.DriveId.AsDriveFolder(), depth + 1);
        }
    }
    await GetFolderMetaDataAsync(DriveClass.DriveApi.GetAppFolder(_googleApiClient), 0);
});
Output:
[SushiHangover.FlightAvionics] (D):AppDataFolder
[SushiHangover.FlightAvionics] (F):.FlightInstrumentationData1.json
[SushiHangover.FlightAvionics] (F):.FlightInstrumentationData2.json
[SushiHangover.FlightAvionics] (F):.FlightInstrumentationData3.json
[SushiHangover.FlightAvionics] (F):AppConfiguration.json
Write a (Text) File Example:
using (var contentResults = await DriveClass.DriveApi.NewDriveContentsAsync(_googleApiClient))
using (var writer = new OutputStreamWriter(contentResults.DriveContents.OutputStream))
using (var changeSet = new MetadataChangeSet.Builder()
    .SetTitle("AppConfiguration.txt")
    .SetMimeType("text/plain")
    .Build())
{
    writer.Write("StackOverflow Rocks\n");
    writer.Write("StackOverflow Rocks\n");
    writer.Close();
    await DriveClass.DriveApi.GetAppFolder(_googleApiClient).CreateFileAsync(_googleApiClient, changeSet, contentResults.DriveContents);
}
Note: Substitute an IDriveFolder for DriveClass.DriveApi.GetAppFolder to save a file in a subfolder of the AppFolder.
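For instance, reusing a folder's Metadata from the directory listing above (a sketch; subFolderItem is a hypothetical Metadata entry for a folder):

// subFolderItem is a folder's Metadata found via the listing above.
var subFolder = subFolderItem.DriveId.AsDriveFolder();
await subFolder.CreateFileAsync(_googleApiClient, changeSet, contentResults.DriveContents);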
Read a (text) File Example:
Note: driveItem in the following example is an existing text/plain-based Metadata object that is found by recursing through the Drive contents (see the Get Directory/File Tree example above) or by creating a query (Query.Builder) and executing it via DriveClass.DriveApi.QueryAsync.
var fileContexts = new StringBuilder();
using (var results = await driveItem.DriveId.AsDriveFile().OpenAsync(_googleApiClient, DriveFile.ModeReadOnly, null))
using (var inputStream = results.DriveContents.InputStream)
using (var streamReader = new StreamReader(inputStream))
{
    while (streamReader.Peek() >= 0)
        fileContexts.Append(await streamReader.ReadLineAsync());
}
Log.Debug(TAG, fileContexts.ToString());
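For completeness, a sketch of the query route mentioned in the note above; the class names (Query.Builder, Filters, SearchableField) mirror the Java Drive API and may differ slightly in the binding:

// Find a file by title instead of walking the tree.
var query = new Query.Builder()
    .AddFilter(Filters.Eq(SearchableField.Title, "AppConfiguration.json"))
    .Build();
using (var result = await DriveClass.DriveApi.QueryAsync(_googleApiClient, query))
{
    var driveItem = result.MetadataBuffer.FirstOrDefault();
    // driveItem can now be used with the read example above.
}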
I am trying to import Attachments/Annotations into CRM Dynamics; I am doing this using the SDK.
I am not using the data import wizard.
I am not individually creating Annotation entities; instead, I am using the Data Import feature programmatically.
I mostly leveraged the DataImport sample from the SDK sample code (SDK\SampleCode\CS\DataManagement\DataImport).
Import import = new Import()
{
    ModeCode = new OptionSetValue((int)ImportModeCode.Create),
    Name = "Data Import"
};
Guid importId = _serviceProxy.Create(import);

_serviceProxy.Create(
    new ColumnMapping()
    {
        ImportMapId = new EntityReference(ImportMap.EntityLogicalName, importMapId),
        ProcessCode = new OptionSetValue((int)ColumnMappingProcessCode.Process),
        SourceEntityName = sourceEntityName,
        SourceAttributeName = sourceAttributeName,
        TargetEntityName = targetEntityName,
        TargetAttributeName = targetAttributeName
    });
I am getting an error "The reference to the attachment could not be found".
The documentation says the CRM async service will find the physical file on disk and upload it. My question is: where does the async service look for attachment files?
I tried to map the documentbody field to the full path of the attachment on the disk, but that still didn't work.
The answer below was provided before the question edits clarifying the use of the import wizard instead of the SDK. The answer below is specific to using the SDK.
When you attach files to an Annotation (Note) record in CRM via the SDK, you do use the documentbody attribute (along with mimetype), but you first have to convert the file contents to Base64.
Something like this:
var myFile = @"C:\Path\To\My\File.pdf";

// Do checks to make sure file exists...

// Convert to Base64.
var base64Data = Convert.ToBase64String(System.IO.File.ReadAllBytes(myFile));

var newNote = new Entity("annotation");

// Set subject, regarding object, etc.

// Add the data required for a file attachment.
newNote.Attributes.Add("documentbody", base64Data);
newNote.Attributes.Add("mimetype", "text/plain"); // This mime type seems to work for all file types.

orgService.Create(newNote);
I found the solution in an obscure blog post; I think the documentation is misleading or unclear. Given how this actually works, the suggestion that the files have to be available on the server's disk for the async service to process is odd.
Following the same principle as the CSV file itself, all content has to be uploaded as part of the import and linked to it.
To solve this, we need to create an individual, special "Internal" ImportFile for each physical attachment and link it to the import that holds the attachment record details.
As you can see below, after linking the attachment ImportFile via ImportId and setting the two properties ProcessCode and FileTypeCode, it all worked in the end.
Suffice it to say, using this method is much more efficient and quicker than individually creating Annotation records.
foreach (var line in File.ReadLines(csvFilesPath + "Attachment.csv").Skip(1))
{
    var fileName = line.Split(',')[0].Replace("\"", null);

    // Read and Base64-encode the physical attachment file.
    byte[] byteData = File.ReadAllBytes(attachmentsPath + fileName);
    string encodedAttachmentData = System.Convert.ToBase64String(byteData);

    ImportFile importFileAttachment = new ImportFile()
    {
        Content = encodedAttachmentData,
        Name = fileName,
        ImportMapId = new EntityReference(ImportMap.EntityLogicalName, importMapId),
        UseSystemMap = true,
        ImportId = new EntityReference(Import.EntityLogicalName, importId),
        ProcessCode = new OptionSetValue((int)ImportFileProcessCode.Internal),
        FileTypeCode = new OptionSetValue((int)ImportFileFileTypeCode.Attachment),
        RecordsOwnerId = currentUserRef
    };
    _serviceProxy.Create(importFileAttachment);

    idx++; // idx is a running counter declared outside this snippet
}
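Once the attachment ImportFiles are created alongside the main data ImportFile, the import still has to be kicked off with the usual sequence of messages. A sketch following the SDK's DataImport sample (importId is the Guid created earlier; each request spawns an asynchronous job, so the sample polls the import's status between steps):

// Parse the uploaded files into the staging tables.
_serviceProxy.Execute(new ParseImportRequest { ImportId = importId });

// Transform the parsed data using the import map.
_serviceProxy.Execute(new TransformImportRequest { ImportId = importId });

// Create the actual records (and their attachments) in CRM.
_serviceProxy.Execute(new ImportRecordsImportRequest { ImportId = importId });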
I'm working on a project team and our application is in TFS. I'm attempting to determine how many lines of code each team member is responsible for. In TFS, I'm aware of the Annotate feature in the Visual Studio interface, which lets you see who last modified each line of code, so I know TFS has this information.
I've written a small console app which accesses my TFS project and all its files, but I now need to programmatically access annotations so I can see who the owner of each line is. Here is my existing code:
using System;
using System.Linq;
using System.Net;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

public class Program
{
    static void Main(string[] args)
    {
        // username, password, domain, serverUrl and projectPath are defined elsewhere.
        var credentials = new NetworkCredential(username, password, domain);
        var server = new TfsTeamProjectCollection(new Uri(serverUrl), credentials);
        var version = server.GetService(typeof(VersionControlServer)) as VersionControlServer;
        var items = version.GetItems(projectPath, RecursionType.Full);
        var fileItems = items.Items.Where(x => x.ItemType == ItemType.File);

        foreach (var fileItem in fileItems)
        {
            var serverItem = fileItem.ServerItem;
            //TODO: retrieve and parse annotations
        }
    }
}
I can't seem to figure out how to retrieve annotations once I have the TFS item. This link explains how to do it by calling TFPT, but after implementing it (tfpt annotate /noprompt <filename>), you are only given the last changeset and code per line, not the owner.
I also found a Microsoft.TeamFoundation.VersionControl.Server namespace that has an Annotation class. I installed TFS on my machine to get access to that DLL, but it doesn't seem to be of any help for this problem.
How can you programmatically access TFS annotations to determine the owner of a line of code for a file?
You may have to query the branch when an item's change type is Branch.
For a simple example, consider this scenario:
$/Project
    /Main
        /a.txt
    /Develop
        /a.txt (branched from Main)
When you query the history of $/Project/Develop/a.txt, you can also get the history of $/Project/Main/a.txt using the following code:
using Microsoft.TeamFoundation.VersionControl.Client;

// vcs is the VersionControlServer instance (see the question's code).
void GetAllHistory(string serverItem)
{
    var changesets = vcs.QueryHistory(serverItem,
        VersionSpec.Latest,
        0,
        RecursionType.None,
        null,
        new ChangesetVersionSpec(1),
        VersionSpec.Latest,
        int.MaxValue,
        true,
        false);

    foreach (var obj in changesets)
    {
        Changeset cs = obj as Changeset;
        if (cs == null)
        {
            return;
        }
        foreach (var change in cs.Changes)
        {
            if (change.Item.ServerItem != serverItem)
            {
                return;
            }
            Console.WriteLine(string.Format("ChangeSetID:{0}\tFile:{1}\tChangeType:{2}", cs.ChangesetId, change.Item.ServerItem, change.ChangeType));
            if ((change.ChangeType & ChangeType.Branch) == ChangeType.Branch)
            {
                var items = vcs.GetBranchHistory(
                    new ItemSpec[] { new ItemSpec(serverItem, RecursionType.None) },
                    VersionSpec.Latest);
                GetAllHistory(items[0][0].Relative.BranchToItem.ServerItem);
            }
        }
    }
}
My specific problem is how I can automate "Add-Migration" in a build process for Entity Framework. In researching this, it seems the most likely approach is something along the lines of automating these steps:
Open a solution in Visual Studio 2013
Execute "Add-Migration blahblah" in the Package Manager Console (most likely via an add-in vsextention)
Close the solution
This initial approach is based on my own research and this question: the PowerShell script ultimately behind Add-Migration requires quite a bit of setup to run. Visual Studio performs that setup automatically when creating the Package Manager Console and making the DTE object available. I would prefer not to attempt to duplicate that setup outside of Visual Studio.
One possible path to a solution is this unanswered Stack Overflow question.
In researching the NuGet API, it does not appear to have a "send this text and it will be run as if it were typed in the console" capability. I am not clear on the boundary between Visual Studio and NuGet, so I am not sure this is something that would even be there.
I am able to find the "Package Manager Console", ironically enough, via the "$dte.Windows" command in the Package Manager Console, but in a VS 2013 window that collection gives me objects of type "Microsoft.VisualStudio.Platform.WindowManagement.DTE.WindowBase". If there is a way to stuff text into it, I think I need to get it as a "NuGetConsole.Implementation.PowerConsoleToolWindow"; from reviewing the source code I am not clear how the text would be stuffed, and I am not at all familiar with what I am seeing.
Worst case, I will fall back to sending keystrokes to it along the lines of this question, but I would prefer not to, since that will substantially complicate the automation surrounding the build process.
All of that being said,
Is it possible to stream commands via code to the Package Manager Console in Visual Studio which is fully initialized and able to support an Entity Framework "add-migration" command?
Thanks for any suggestions, advice, help, non-abuse in advance,
John
The approach that worked for me was to trace into the Entity Framework code, starting with AddMigrationCommand.cs in the EntityFramework.PowerShell project, find the hooks into the EntityFramework project, and then make those hooks work so there is no PowerShell dependency.
You end up with something like this:
public static void RunIt(EnvDTE.Project project, Type dbContext, Assembly migrationAssembly, string migrationDirectory,
    string migrationsNamespace, string contextKey, string migrationName)
{
    DbMigrationsConfiguration migrationsConfiguration = new DbMigrationsConfiguration();
    migrationsConfiguration.AutomaticMigrationDataLossAllowed = false;
    migrationsConfiguration.AutomaticMigrationsEnabled = false;
    migrationsConfiguration.CodeGenerator = new CSharpMigrationCodeGenerator(); // same as the default
    migrationsConfiguration.ContextType = dbContext;
    migrationsConfiguration.ContextKey = contextKey;
    migrationsConfiguration.MigrationsAssembly = migrationAssembly;
    migrationsConfiguration.MigrationsDirectory = migrationDirectory;
    migrationsConfiguration.MigrationsNamespace = migrationsNamespace;

    System.Data.Entity.Infrastructure.DbConnectionInfo dbi = new System.Data.Entity.Infrastructure.DbConnectionInfo("DataContext");
    migrationsConfiguration.TargetDatabase = dbi;

    MigrationScaffolder ms = new MigrationScaffolder(migrationsConfiguration);
    ScaffoldedMigration sf = ms.Scaffold(migrationName, false);
    // sf.UserCode, sf.DesignerCode and sf.Resources hold the generated migration.
}
You can use this question to get to the DTE object and from there find the project object to pass into the call.
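A minimal sketch of that lookup, assuming a single running Visual Studio 2013 instance (DTE ProgID version 12.0) and a placeholder project name:

using System.Linq;
using System.Runtime.InteropServices;

// Attach to the running Visual Studio 2013 instance.
var dte = (EnvDTE.DTE)Marshal.GetActiveObject("VisualStudio.DTE.12.0");

// Find the target project by name ("MyProject" is a placeholder).
var project = dte.Solution.Projects
    .Cast<EnvDTE.Project>()
    .First(p => p.Name == "MyProject");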
This is an update to John's answer, whom I have to thank for the "hard part". Here is a complete example that creates a migration and adds it to the supplied project (the project must be built first), the same way Add-Migration InitialBase -IgnoreChanges would:
public void ScaffoldedMigration(EnvDTE.Project project)
{
    var migrationsNamespace = project.Properties.Cast<Property>()
        .First(p => p.Name == "RootNamespace").Value.ToString() + ".Migrations";
    var assemblyName = project.Properties.Cast<Property>()
        .First(p => p.Name == "AssemblyName").Value.ToString();
    var rootPath = Path.GetDirectoryName(project.FullName);
    var assemblyPath = Path.Combine(rootPath, "bin", assemblyName + ".dll");
    var migrationAssembly = Assembly.Load(File.ReadAllBytes(assemblyPath));

    // Find the DbContext type in the project's compiled assembly.
    Type dbContext = null;
    foreach (var type in migrationAssembly.GetTypes())
    {
        if (type.IsSubclassOf(typeof(DbContext)))
        {
            dbContext = type;
            break;
        }
    }

    var migrationsConfiguration = new DbMigrationsConfiguration()
    {
        AutomaticMigrationDataLossAllowed = false,
        AutomaticMigrationsEnabled = false,
        CodeGenerator = new CSharpMigrationCodeGenerator(),
        ContextType = dbContext,
        ContextKey = migrationsNamespace + ".Configuration",
        MigrationsAssembly = migrationAssembly,
        MigrationsDirectory = "Migrations",
        MigrationsNamespace = migrationsNamespace
    };
    var dbi = new System.Data.Entity.Infrastructure
        .DbConnectionInfo("ConnectionString", "System.Data.SqlClient");
    migrationsConfiguration.TargetDatabase = dbi;

    var scaffolder = new MigrationScaffolder(migrationsConfiguration);
    ScaffoldedMigration migration = scaffolder.Scaffold("InitialBase", true);

    // Write the migration's user code and add it to the project.
    var migrationFile = Path.Combine(rootPath, migration.Directory,
        migration.MigrationId + ".cs");
    File.WriteAllText(migrationFile, migration.UserCode);
    var migrationItem = project.ProjectItems.AddFromFile(migrationFile);

    // Write the designer code, add it, and nest it under the migration file.
    var designerFile = Path.Combine(rootPath, migration.Directory,
        migration.MigrationId + ".Designer.cs");
    File.WriteAllText(designerFile, migration.DesignerCode);
    var designerItem = project.ProjectItems.AddFromFile(designerFile); // note: designerFile, not migrationFile
    foreach (Property prop in designerItem.Properties)
    {
        if (prop.Name == "DependentUpon")
            prop.Value = Path.GetFileName(migrationFile);
    }

    // Write the resources, add them, and nest them as well.
    var resxFile = Path.Combine(rootPath, migration.Directory,
        migration.MigrationId + ".resx");
    using (ResXResourceWriter resx = new ResXResourceWriter(resxFile))
    {
        foreach (var kvp in migration.Resources)
            resx.AddResource(kvp.Key, kvp.Value);
    }
    var resxItem = project.ProjectItems.AddFromFile(resxFile);
    foreach (Property prop in resxItem.Properties)
    {
        if (prop.Name == "DependentUpon")
            prop.Value = Path.GetFileName(migrationFile);
    }
}
I execute this in my project template's IWizard implementation, where I run a migration with IgnoreChanges because of entities shared with the base project. Change scaffolder.Scaffold("InitialBase", true) to scaffolder.Scaffold("InitialBase", false) if you want to include the changes.