The scenario is this one:
We need to reduce the build time of a solution that contains several projects. We have the constraint that we cannot consolidate projects, so we have around 50 of them. The build currently takes around 3 minutes. A quick test of manually setting the CopyLocal property of all project references to False and changing the output directory to a central one improved build performance by more than 50%. However, the problem is that by doing so, DLLs are missing when we deploy or run our tests on CI (I suspect it builds using only the main project and not all projects of the solution).
I thought that I could have 2 sets of build directives: one for developing, which sets CopyLocal to false and outputs all DLLs into a single directory, and one for deploying to CI and the Azure website (which preserves the normal DLL locations).
I have read in a previous post and here that it is possible to set CopyLocal with MSBuild by using a targets file (not sure yet about the output directory). Thus, I could have the build use the targets file on the local machine but not when deploying.
My question is: how can I have Visual Studio's build use a specific targets file when developing, but not when deploying from the IDE to Azure or the CI environment?
You can use an ItemGroup with a condition to optionally override the output directory, or use it to remove the copy-local flag from your Reference items. The same trick works for ProjectReference items.
See:
https://stackoverflow.com/a/1684045/736079
Then make this group conditional using a number of flags:
IsDesktopBuild: true for a build that's running outside of a build server
BuildingInsideVisualStudio: true for a build that's running inside Visual Studio
See:
https://stackoverflow.com/a/20402204/736079
Putting it all together:
<Target Name="BeforeBuild" Condition="'$(IsDesktopBuild)' == 'true'">
  <ItemGroup>
    <ProjectReferenceNew Include="@(ProjectReference)">
      <Private>False</Private>
    </ProjectReferenceNew>
    <ProjectReference Remove="@(ProjectReference)"/>
    <ProjectReference Include="@(ProjectReferenceNew)"/>
  </ItemGroup>
  <ItemGroup>
    <ReferenceNew Include="@(Reference)">
      <Private>False</Private>
    </ReferenceNew>
    <Reference Remove="@(Reference)"/>
    <Reference Include="@(ReferenceNew)"/>
  </ItemGroup>
</Target>
To improve performance of this translation, you'll want to specify the Input and Output parameters of the Target:
<Target Name="BeforeBuild" Condition="'$(IsDesktopBuild)' == 'true'"
        Inputs="@(Reference);@(ProjectReference)"
        Outputs="@(Reference);@(ProjectReference)">
  ...
</Target>
Related
I've got a custom code analysis ruleset that I want to apply to all configurations of multiple projects in my solution but can't see how I can do it.
To be clear, I'm looking for a way (if any) of doing this in a single step rather than editing the properties of each project from the IDE.
I found this guidance so far: https://learn.microsoft.com/en-us/visualstudio/code-quality/how-to-configure-code-analysis-for-a-managed-code-project?view=vs-2019#specify-rule-sets-for-multiple-projects-in-a-solution
But it doesn't seem to be correct. In Visual Studio 2019, if I go to Analyze > Configure Code Analysis > For Solution I get a blank property page with the message:
NOTE: This property page has been deprecated and will be removed in a future product release.
Is there another way I can do this? I have lots of projects :(
Thanks.
I've had no answers about whether there's a way to do this in Visual Studio, so I've had to resort to altering .csproj files directly in a batch fashion.
This script I found by John Robbins is excellent:
https://www.wintellect.com/batch-updating-changing-visual-studio-projects-with-powershell/
After installation, my usage was like this in case anyone is interested:
dir -recurse *.csproj | Set-ProjectProperties -OverrideDefaultProperties -CustomConfigurationProperties @{ "CodeAnalysisRuleSet" = ".\SystemStandard.ruleset" }
dir -recurse *.csproj | Set-ProjectProperties -OverrideDefaultProperties -CustomConfigurationProperties @{ "RunCodeAnalysis" = "false" }
dir -recurse *.csproj | Set-ProjectProperties -OverrideDefaultProperties -CustomConfigurationProperties @{ "TreatWarningsAsErrors" = "true" }
To specify a rule set for a project, use the CodeAnalysisRuleSet MSBuild property.
To set that property, there are several ways you can customize your build.
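For example, setting it in each project file (or once in a shared file that every project imports) could look like this; the ruleset file name matches the PowerShell usage above and is otherwise an assumption:

```xml
<PropertyGroup>
  <!-- No Condition attribute, so this applies to all configurations. -->
  <CodeAnalysisRuleSet>$(SolutionDir)SystemStandard.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>
```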
If I understand the question correctly, this can be done quite easily by following the steps outlined in this blog post.
The Directory.Build.props approach
The guide is written with StyleCop in mind, but the same steps should work with any analyzer.
Create a file named Directory.Build.props (use this exact casing) along with your .sln file, i.e. at the top level of your solution. Its content should be something like this:
<Project>
  <PropertyGroup>
    <!-- This part specifies the ruleset file name. Change to something
         more appropriate if not using StyleCop. -->
    <CodeAnalysisRuleSet>$(SolutionDir)StyleCop.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>
  <!-- This part adds StyleCop as a reference in all projects + makes the
       top-level stylecop.json file be used by all projects. Skip this
       altogether if you are not specifically using StyleCop. -->
  <ItemGroup>
    <PackageReference Include="StyleCop.Analyzers" Version="1.1.1-rc.108" PrivateAssets="all" />
    <AdditionalFiles Include="$(SolutionDir)stylecop.json" Link="stylecop.json" />
  </ItemGroup>
</Project>
Create a StyleCop.ruleset file with your analyzer configuration.
That's it. The next time you run dotnet build or build your project in Visual Studio (you might have to close/reopen the solution if you have it open), these rules should apply.
For reference, here is the Directory.Build.props file in a project of mine: https://github.com/perlun/perlang/blob/master/Directory.Build.props
If projects in your solution have a hierarchical structure, meaning they reference each other and have a common, most abstract base that is root in the structure similar to this:
Common
├── Worker
└── Persistence
└── API
...then you can reference the StyleCop.Analyzers package in your root project and set the value of the <PrivateAssets> tag to none:
<ItemGroup>
  <PackageReference Include="StyleCop.Analyzers" Version="1.1.118">
    <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    <PrivateAssets>none</PrivateAssets>
  </PackageReference>
</ItemGroup>
The value all is set for PrivateAssets by default when you add the StyleCop.Analyzers package reference to your project. What does that mean?
You might be using a dependency purely as a development harness and might not want to expose that to projects that will consume your package. In this scenario, you can use the PrivateAssets metadata to control this behavior. ~ Docs
So, by default StyleCop.Analyzers will only be enabled in the project in which you explicitly reference the package. But in almost all code bases I've been part of, enforcing the StyleCop rules in all projects in the solution was desired (almost all, except for projects with auto-generated code, e.g. EF migrations). Changing all to none in the package reference metadata will result in passing down the styling rules to all projects that depend on it.
Solution
Summarizing, the root project Common will have to reference StyleCop.Analyzers package while setting <PrivateAssets> to none:
<ItemGroup>
  <PackageReference Include="StyleCop.Analyzers" Version="1.1.118">
    <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    <PrivateAssets>none</PrivateAssets>
  </PackageReference>
</ItemGroup>
and Worker or Persistence or any other dependent project will only have to reference the previous layer, which is common practice in any layered architecture anyway:
<ItemGroup>
  <ProjectReference Include="..\Common\Common.csproj" />
</ItemGroup>
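If you do need the opposite for a project full of auto-generated code (the EF-migrations exception mentioned above), one option, assuming an SDK-style project, is to switch analyzers off in that project's own .csproj; the RunAnalyzers property controls this:

```xml
<PropertyGroup>
  <!-- Disable all Roslyn analyzers (StyleCop.Analyzers included) for this project only. -->
  <RunAnalyzers>false</RunAnalyzers>
</PropertyGroup>
```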
I'm adding a custom .tt template generation target to my project to run before CoreBuild, and there appear to be 2 ways of doing it:
<Project ...>
  <Target Name="TransformOnBuild" AfterTargets="BeforeBuild">
    ...
  </Target>
</Project>
and
<Project ...>
  <Target Name="TransformOnBuild" BeforeTargets="CoreBuild">
    ...
  </Target>
</Project>
If my target should run before my project is built, as the project relies on it, would it be better for me to use the latter? I've seen the former used to do things like generate text templates, but it seems like an unreliable way to do it because it might get run after CoreBuild, which is too late. Or is there some reason why AfterTargets="BeforeBuild" is still guaranteed to run before the core build?
I've also seen BeforeTargets="BeforeBuild" which will run even earlier. Is this a better place to put a .tt text generation target?
Building on @stijn's answer, a good solution seems to be using
BeforeTargets="CoreCompile" DependsOnTargets="PrepareForBuild"
which is what the .NET SDK (new-style csproj for .NET Core/Standard projects) does for automatic AssemblyInfo.cs generation.
It uses the following comment to explain why:
Note that this must run before every invocation of CoreCompile to
ensure that all compiler runs see the generated assembly info. There
is at least one scenario involving Xaml where CoreCompile is invoked
without other potential hooks such as Compile or CoreBuild, etc., so
we hook directly on to CoreCompile. Furthermore, we must run after
PrepareForBuild to ensure that the intermediate directory has been
created.
Note that the "intermediate directory" (obj/[TargetFramework] in this case) is where the generated .cs file is placed, which may also be what you want to do.
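A .tt-style generation target wired up this way might look like the following sketch; the target name, file name, and the WriteLinesToFile stand-in for the real template transform are all placeholders:

```xml
<Target Name="GenerateFromTemplates"
        BeforeTargets="CoreCompile"
        DependsOnTargets="PrepareForBuild">
  <!-- PrepareForBuild has already run, so $(IntermediateOutputPath) exists. -->
  <WriteLinesToFile File="$(IntermediateOutputPath)Generated.g.cs"
                    Lines="// placeholder for real template output"
                    Overwrite="true" />
  <ItemGroup>
    <Compile Include="$(IntermediateOutputPath)Generated.g.cs" />
    <FileWrites Include="$(IntermediateOutputPath)Generated.g.cs" />
  </ItemGroup>
</Target>
```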
Update 2
Based on the documentation from Microsoft:
This is the Target Build Order
InitialTargets targets are run.
Targets specified on the command line by the /target switch are run. If you specify no targets on the command line, then the DefaultTargets targets are run. If neither is present, then the first target encountered is run.
The Condition attribute of the target is evaluated. If the Condition attribute is present and evaluates to false, the target isn't executed and has no further effect on the build.
Before a target is executed, its DependsOnTargets targets are run.
Before a target is executed, any target that lists it in a BeforeTargets attribute is run.
Before a target is executed, its Inputs attribute and Outputs attribute are compared. If MSBuild determines that any output files are out of date with respect to the corresponding input file or files, then MSBuild executes the target. Otherwise, MSBuild skips the target.
After a target is executed or skipped, any target that lists it in an AfterTargets attribute is run.
These are the answers to your questions:
If my target should run before my project is built, as the project relies on it, would it be better for me to use the latter?
No, because all the dependencies of CoreBuild will be executed before your template generation target and that will be too late.
Or is there some reason why AfterTargets="BeforeBuild" is still guaranteed to run before the core build?
AfterTargets="BeforeBuild" guarantees that your target will be executed on time because it will be executed before all the CoreBuild dependencies.
I've also seen BeforeTargets="BeforeBuild" which will run even earlier. Is this a better place to put a .tt text generation target?
In both cases, AfterTargets="BeforeBuild" or BeforeTargets="BeforeBuild", your target will be executed before all the CoreBuild dependencies. However, in both cases you still run the risk of affecting the results of your template generation target, depending on what you execute in BeforeBuild. If you have this under control, you can use either of these options safely.
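If in doubt, a quick way to observe the actual order is a pair of throwaway logging targets; the target names here are arbitrary:

```xml
<Target Name="LogBeforeBeforeBuild" BeforeTargets="BeforeBuild">
  <Message Importance="high" Text="1: BeforeTargets=BeforeBuild runs first" />
</Target>
<Target Name="LogAfterBeforeBuild" AfterTargets="BeforeBuild">
  <Message Importance="high" Text="2: AfterTargets=BeforeBuild runs next, still before CoreBuild" />
</Target>
```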
Running a target before the CoreBuild?
> there appear to be 2 ways of doing it.
There are more options to achieve this. Please review below.
You should use the specific built-in targets (BeforeBuild or AfterBuild) for this purpose. This is the mechanism provided by Microsoft to safely extend the build process when using projects that depend on Microsoft.Common.targets.
If you have only one target to run before CoreBuild, you can do this:
<Target Name="BeforeBuild">
  <!-- add your tasks here -->
</Target>
If you have more than one target to run before CoreBuild, you can define a property with all the targets that needs to be called in the required order of execution:
<PropertyGroup>
  <BeforeBuildDependsOn>
    CustomTarget1;
    CustomTarget2;
    CustomTarget3
  </BeforeBuildDependsOn>
</PropertyGroup>
<Target Name="BeforeBuild" DependsOnTargets="$(BeforeBuildDependsOn)"/>
UPDATE:
Based on the fragment provided by @stijn:
AfterTargets="BeforeBuild" will insert/execute the custom target like this (depends on BeforeBuild):
<BuildDependsOn>
BeforeBuild;
|-> Custom Target
CoreBuild;
AfterBuild
</BuildDependsOn>
BeforeTargets="CoreBuild" will insert/execute the custom target like this (depends on CoreBuild):
<BuildDependsOn>
BeforeBuild;
|-> Custom Target
CoreBuild;
AfterBuild
</BuildDependsOn>
So the "template generation target" will be executed in the same place (between BeforeBuild and CoreBuild), but depending on different targets; that's why the appropriate target to use is BeforeBuild, either inline or with dependencies.
Now regarding the 3rd party issue comment: the BeforeBuild/AfterBuild targets are intended for final users, and third party providers should implement their scripts without affecting the basic workflow. These are some of the options a 3rd party should use to avoid breaking the regular flow:
Considering this as the base:
<PropertyGroup>
  <BuildDependsOn>
    BeforeBuild;
    CoreBuild;
    AfterBuild
  </BuildDependsOn>
</PropertyGroup>
<Target Name="Build" DependsOnTargets="$(BuildDependsOn)"/>
Option 1: This will inject your custom 3rd party script before BeforeBuild without affecting the default BuildDependsOn sequence, which still allows the final user to use the BeforeBuild target as usual.
<PropertyGroup>
  <BuildDependsOn>
    MyCustomThirdParty;
    $(BuildDependsOn)
  </BuildDependsOn>
</PropertyGroup>
<PropertyGroup>
  <MyCustomThirdPartyDependsOn>
    BeforeMyCustomThirdParty;
    CustomStep1;
    CustomStep2;
    CustomStep3;
    AfterMyCustomThirdParty
  </MyCustomThirdPartyDependsOn>
</PropertyGroup>
<Target Name="MyCustomThirdParty" DependsOnTargets="$(MyCustomThirdPartyDependsOn)"/>
Option 2: If the 3rd party script needs to be executed after the BeforeBuild target, this can be done like this:
<PropertyGroup>
  <BuildDependsOn>
    BeforeBuild;
    MyCustomThirdParty;
    CoreBuild;
    AfterBuild
  </BuildDependsOn>
</PropertyGroup>
NOTE: In order for this to work properly, you must add the PropertyGroup and targets AFTER the Microsoft.CSharp.targets import.
This way you will be able to use multiple 3rd party scripts respecting the general workflow.
You can obviously use a combination of these options depending on the situation. But you should follow these general rules:
Respect the default workflow (BeforeBuild, CoreBuild, AfterBuild).
Include Before/After targets for your 3rd party script to allow the final user to inject anything before or after the 3rd party script execution.
These should be considered when you are using the default Visual Studio generated build scripts (projects like .csproj, .vbproj, etc.). If you are implementing your own scripts for other languages or purposes, you can use BeforeTargets and AfterTargets wherever you want, but why not follow the good practices established by the existing scripts?
From Microsoft.Common.CurrentVersion.targets, the Build target is basically:
<PropertyGroup>
  <BuildDependsOn>
    BeforeBuild;
    CoreBuild;
    AfterBuild
  </BuildDependsOn>
</PropertyGroup>
<Target Name="Build" DependsOnTargets="$(BuildDependsOn)"/>
<PropertyGroup>
  <CoreBuildDependsOn>
    PrepareForBuild;
    PreBuildEvent;
    ...
    Compile;
    ...
    PostBuildEvent
  </CoreBuildDependsOn>
</PropertyGroup>
<Target Name="CoreBuild" DependsOnTargets="$(CoreBuildDependsOn)">
So using BeforeTargets="CoreBuild" will indeed run before CoreBuild, but that is after all its dependent targets have run, so after all actual build steps. That's usually not what you want; instead, if you want to run something before compilation etc., use BeforeTargets="PrepareForBuild", or indeed AfterTargets="BeforeBuild", or even BeforeTargets="BeforeBuild".
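A quick way to verify this in any project is to hook both points with Message tasks and compare their order in the build log; the target names here are arbitrary:

```xml
<Target Name="ProveLateHook" BeforeTargets="CoreBuild">
  <!-- Prints only after CoreBuild's own dependencies (Compile, etc.) have run. -->
  <Message Importance="high" Text="BeforeTargets=CoreBuild: compilation already happened" />
</Target>
<Target Name="ProveEarlyHook" BeforeTargets="PrepareForBuild">
  <Message Importance="high" Text="BeforeTargets=PrepareForBuild: before the real build steps" />
</Target>
```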
This is my situation:
I have a VS2010 solution with X projects included.
A WiX project that can create an MSI from all compiled artifacts.
I have a build machine / Jenkins that first compiles (MSBuild, .NET 4) the whole solution, then builds the WiX project to package it into an MSI.
What/how can I inject the product version number (e.g. 11.2.0.4789) into all artifacts/DLLs, as simply as possible?
Are there any command-line arguments that can be passed while compiling the solution?
There are tools, such as several extensions for MSBuild, that do version stamping but each assumes a particular workflow. You might find one that works for you but a DIY method would help you evaluate them, even if it isn't your final solution.
You can add a property to the MSBuild command-line like this:
msbuild /p:VersionStamp=11.2.0.4789
Note: I assume you are going to parameterize the Jenkins build in some way or generate the number during a preceding build step. Here is a simulation of that:
echo 11.2.0.4789 >version.txt
set /p version=<version.txt
msbuild /p:VersionStamp=%version%
Now, the work is in getting each project to use it. That would depend on the project type and where you want VersionStamp to appear.
For a csproj, you might want to use it as the AssemblyVersion. The simplest way is to move the attribute to a cs file by itself and rewrite it every time. I would leave a comment in AssemblyInfo.cs as a clue to where it now comes from. You can include the cs file in your project either dynamically or permanently. I prefer dynamically, since it is effectively an intermediate file for the build. So, add the following to your .csproj in a text editor (e.g. in Visual Studio: unload the project, then edit it):
<Target Name="BeforeBuild">
  <PropertyGroup>
    <AssemblyVersionPath>$(IntermediateOutputPath)AssemblyVersion.cs</AssemblyVersionPath>
  </PropertyGroup>
  <ItemGroup>
    <Compile Include="$(AssemblyVersionPath)" />
  </ItemGroup>
  <WriteLinesToFile
      File='$(AssemblyVersionPath)'
      Overwrite="true"
      Condition="'$(VersionStamp)' != ''"
      Lines='using System.Reflection%3b;
             [assembly: AssemblyVersion("$(VersionStamp)")]' />
</Target>
This is sufficient, but a more thorough solution would also add the file to a list (such as FileWrites) so it is cleaned with other intermediate files, and would only write the file when the version changes, to prevent unnecessary rebuilds, etc.
Use a similar technique for other project types.
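For the csproj case, that more thorough variant might look like the following sketch; FileWrites is the item list the Clean target uses, and WriteOnlyWhenDifferent (available in newer MSBuild versions) avoids rewriting an unchanged file:

```xml
<Target Name="BeforeBuild">
  <PropertyGroup>
    <AssemblyVersionPath>$(IntermediateOutputPath)AssemblyVersion.cs</AssemblyVersionPath>
  </PropertyGroup>
  <ItemGroup>
    <Compile Include="$(AssemblyVersionPath)" />
    <!-- Registering the file lets IncrementalClean/Clean delete it too. -->
    <FileWrites Include="$(AssemblyVersionPath)" />
  </ItemGroup>
  <WriteLinesToFile File="$(AssemblyVersionPath)"
                    Overwrite="true"
                    WriteOnlyWhenDifferent="true"
                    Condition="'$(VersionStamp)' != ''"
                    Lines='using System.Reflection%3b;[assembly: AssemblyVersion("$(VersionStamp)")]' />
</Target>
```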
I'm trying to write a code generation tool. For this tool it's important that the generated code is available prior to building (i.e., for IntelliSense). I know Visual Studio will at least partially evaluate the project build plan automatically to generate IntelliSense, but I can't find much information on the details.
As a simpler example, let's say I want to take all items with build action None and compile them. I have a project like this:
<Project [...]>
  [...]
  <Compile Include="Foo.cs" />
  <None Include="Bar.cs" />
</Project>
One way to get Bar.cs to compile is to add the following to the project:
<PropertyGroup>
  <CoreCompileDependsOn>
    $(CoreCompileDependsOn);IndirectCompile
  </CoreCompileDependsOn>
</PropertyGroup>
<Target Name="IndirectCompile">
  <CreateItem Include="@(None)">
    <Output ItemName="Compile" TaskParameter="Include" />
  </CreateItem>
</Target>
If I do it this way, Visual Studio acts basically the same as if Bar.cs had the Compile action to begin with. IntelliSense is fully available; if I make a change in Bar.cs it's reflected immediately (well, as immediate as the background operation normally is) in IntelliSense when I'm editing Foo.cs, and so on.
However, say instead of directly compiling the None entry, I want to copy it to the obj directory and then compile it from there. I can do this by changing the IndirectCompile target to this:
<Target Name="IndirectCompile"
        Inputs="@(None)"
        Outputs="@(None->'$(IntermediateOutputPath)%(FileName).g.cs')">
  <Copy SourceFiles="@(None)"
        DestinationFiles="@(None->'$(IntermediateOutputPath)%(FileName).g.cs')">
    <Output TaskParameter="DestinationFiles" ItemName="Compile" />
  </Copy>
</Target>
Doing this causes IntelliSense to stop updating. The task works on build, dependency analysis and incremental building work, Visual Studio just stops automatically running it when an input file is saved.
So, that leads to the title question: How does Visual Studio choose to run targets or not for IntelliSense? The only official documentation I've found has been this, specifically the "Design-Time IntelliSense" section. I'm pretty sure my code meets all those criteria. What am I missing?
After a few days of experimenting and poking around in the debugger I think I have found the answer, and unfortunately that answer is that this is not possible (at least not in a clearly supported way -- I'm sure there are ways to trick the system).
When a project is loaded, and when the project itself changes (files added/removed, build actions changed, etc), the IntelliSense build is executed (csproj.dll!CLangCompiler::RunIntellisenseBuild). This build will run tasks up to and including the Csc task. Csc will not execute normally, but instead just feed its inputs back into its host (Visual Studio).
From this point on, Visual Studio keeps track of the files that were given as Sources to the Csc task. It will monitor those files for changes, and when they change, update IntelliSense. So in my example, if I manually edit Bar.g.cs those changes will be picked up. But the build tasks themselves will not be run again until the project changes or a build is explicitly requested.
So, that's disappointing, but not surprising, I guess. It also explains something else I had always wondered about -- XAML files with a code-behind tend to have a Custom Tool action of MSBuild:Compile, presumably for exactly this reason.
I'm going to mark this as the answer, but I'd love to be told I'm wrong and that I missed something.
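For reference, the XAML hook mentioned above is just item metadata; this is how a Page item typically carries it (applying the same metadata to other file types is an experiment, not a documented guarantee):

```xml
<ItemGroup>
  <Page Include="MainWindow.xaml">
    <Generator>MSBuild:Compile</Generator>
    <SubType>Designer</SubType>
  </Page>
</ItemGroup>
```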
The Microsoft AJAX Minifier provides a build task which can be used in TFS or local build definitions.
I have successfully used this in both a local project file (building to separate files) and in TFS build definitions (overwriting the existing JS files).
I'd like to move to using Visual Studio 2010's 1-click publish feature rather than a TFS build definition; however, adding the minification task as an AfterBuild target in the project file doesn't appear to affect the 1-click publish feature.
Using information found in this thread and these articles, I tried creating a file named [ProjectName].wpp.targets in my WAP root directory, with the following XML:
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(MSBuildExtensionsPath)\Microsoft\MicrosoftAjax\ajaxmin.tasks" />
  <Target Name="Minify" BeforeTargets="MSDeployPublish">
    <ItemGroup>
      <JS Include="**\*.js" Exclude="**\*.min.js;**\*vsddoc.js;**\*debug.js" />
    </ItemGroup>
    <ItemGroup>
      <CSS Include="**\*.css" Exclude="**\*.min.css" />
    </ItemGroup>
    <AjaxMin JsSourceFiles="@(JS)" JsSourceExtensionPattern="\.js$" JsTargetExtension=".min.js"
             CssSourceFiles="@(CSS)" CssSourceExtensionPattern="\.css$" CssTargetExtension=".min.css" />
  </Target>
</Project>
This doesn't appear to have any effect, and unfortunately Visual Studio doesn't give much in the way of feedback or debugging info for these tools.
Has anyone had any success minifying JavaScript / CSS using the Visual Studio 2010 1-click publish feature?
I just wrote a detailed blog entry on how to do this at
http://sedodream.com/2011/02/25/HowToCompressCSSJavaScriptBeforePublishpackage.aspx and http://blogs.msdn.com/b/webdevtools/archive/2011/02/24/how-to-compress-css-javascript-before-publish-package.aspx.
Here are the contents
Today I saw a post on stackoverflow.com asking Using Microsoft AJAX Minifier with Visual Studio 2010 1-click publish. This is a response to that question. The Web Publishing Pipeline is pretty extensive, so it is easy to hook into it in order to perform operations such as these. One of those extension points, as we've blogged about before, is creating a .wpp.targets file. If you create a file in the same directory as your project with the name {ProjectName}.wpp.targets, then that file will automatically be imported and included in the build/publish process. This makes it easy to edit your build/publish process without always having to edit the project file itself. I will use this technique to demonstrate how to compress the CSS & JavaScript files a project contains before it is published/packaged.
Even though the question specifically states Microsoft AJAX Minifier, I decided to use the compressor contained in Packer.NET (link in the resources section). I did this because when I looked at the MSBuild task for the AJAX Minifier, it didn't look like I could control the output location of the compressed files. Instead it would simply write to the same folder with an extension like .min.css or .min.js. In any case, when you publish/package your Web Application Project (WAP), the files are copied to a temporary location before the publish/package occurs. The default value for this location is obj\{Configuration}\Package\PackageTmp\, where {Configuration} is the build configuration that you are currently using for your WAP. So what we need to do is to allow the WPP to copy all the files to that location, and then after that we can compress the CSS and JavaScript files that go into that folder. The target which copies the files to that location is CopyAllFilesToSingleFolderForPackage. (To learn more about these targets take a look at the file %ProgramFiles(x86)%\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets.) To make our target run after this target we can use the MSBuild AfterTargets attribute. The project that I created to demonstrate this is called CompressBeforePublish; because of that, I created a new file named CompressBeforePublish.wpp.targets to contain my changes.
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="SmallSharpTools.Packer.MSBuild.Packer"
             AssemblyFile="$(MSBuildThisFileDirectory)..\Contrib\SmallSharpTools.Packer\SmallSharpTools.Packer.dll" />
  <!-- This target will run after the files are copied to the PackageTmp folder -->
  <Target Name="CompressJsAndCss" AfterTargets="CopyAllFilesToSingleFolderForPackage">
    <!-- Discover files to compress -->
    <ItemGroup>
      <_JavaScriptFiles Include="$(_PackageTempDir)\Scripts\**\*.js" />
      <_CssFiles Include="$(_PackageTempDir)\Content\**\*.css" />
    </ItemGroup>
    <Message Text="Compressing JavaScript files" Importance="high" />
    <!--
      Compress the JavaScript files.
      Note the usage of %(_JavaScriptFiles.Identity), which causes this task to run once per
      .js file in the _JavaScriptFiles item list.
      For more info on batching: http://sedotech.com/resources#Batching
    -->
    <Packer InputFiles="%(_JavaScriptFiles.Identity)"
            OutputFileName="@(_JavaScriptFiles->'$(_PackageTempDir)\Scripts\%(RecursiveDir)%(Filename)%(Extension)')"
            Mode="JSMin"
            Verbose="false"
            Condition=" '@(_JavaScriptFiles)' != '' " />
    <Message Text="Compressing CSS files" Importance="high" />
    <Packer InputFiles="%(_CssFiles.Identity)"
            OutputFileName="@(_CssFiles->'$(_PackageTempDir)\Content\%(RecursiveDir)%(Filename)%(Extension)')"
            Mode="CSSMin"
            Verbose="false"
            Condition=" '@(_CssFiles)' != '' " />
  </Target>
</Project>
Here I’ve created one target, CompressJsAndCss, and I have included AfterTargets=”CopyAllFilesToSingleFolderForPackage” which causes it to be executed after CopyAllFilesToSingleFolderForPackage. Inside this target I do two things, gather the files which need to be compressed and then I compress them.
1. Gather files to be compressed
<ItemGroup>
  <_JavaScriptFiles Include="$(_PackageTempDir)\Scripts\**\*.js" />
  <_CssFiles Include="$(_PackageTempDir)\Content\**\*.css" />
</ItemGroup>
Here I use an item list for both JavaScript files as well as CSS files. Notice that I am using the _PackageTempDir property to pick up .js & .css files inside the temporary folder where the files are written to be packaged. The reason I'm doing that instead of picking up source files is that my build may be outputting other .js & .css files which are going to be published. Note: since the property _PackageTempDir starts with an underscore, it is not guaranteed to behave (or even exist) in future versions.
2. Compress files
I use the Packer task to compress the .js and .css files. For both sets of files the usage is pretty similar so I will only look at the first usage.
<Packer InputFiles="%(_JavaScriptFiles.Identity)"
        OutputFileName="@(_JavaScriptFiles->'$(_PackageTempDir)\Scripts\%(RecursiveDir)%(Filename)%(Extension)')"
        Mode="JSMin"
        Verbose="false"
        Condition=" '@(_JavaScriptFiles)' != '' " />
Here the task is fed all the .js files for compression. Note how I passed the files into the task using %(_JavaScriptFiles.Identity); in this case what that does is cause the task to be executed once per .js file. The %(abc.def) syntax invokes batching; if you are not familiar with batching please see below. For the value of the output file I use the _PackageTempDir property again. In this case, since the item already resides there, I could have simplified that to @(_JavaScriptFiles->'%(FullPath)'), but I thought you might find the longer expression helpful in case you are compressing files which do not already exist in the _PackageTempDir folder.
Now that we have added this target to the .wpp.targets file, we can publish/package our web project and the contained .js & .css files will be compressed. Note: whenever you modify the .wpp.targets file you will have to unload/reload the web project so that the changes are picked up; Visual Studio caches your projects.
In the image below you can see the difference that compressing these files made.
You can download the entire project below, as well as take a look at some other resources that I have that you might be interested in.
Resources
http://sedotech.com/content/samples/CompressBeforePublish.zip
http://sedotech.com/resources#batching
MSBuild BeforeTargets/AfterTargets
WebDeploymentToolMSDeployHowToExcludeFilesFromPackageBasedOnConfiguration.aspx
Packer.NET
For this to work in Visual Studio 2015, we have to change the AfterTargets from
<Target Name="CompressJsAndCss" AfterTargets="CopyAllFilesToSingleFolderForPackage">
to the following
<Target Name="CompressJsAndCss" AfterTargets="PipelineCopyAllFilesToOneFolderForMsdeploy">
enjoy!!