SSIS: Package password error when building - visual-studio

We have several hundred SSIS packages in a Visual Studio Integration Services project. When the project was originally set up, it was configured to encrypt sensitive data in the packages with the user key. That caused problems whenever the project file was checked out: we got merge conflicts because, of course, each developer's user key is different.
We recently attempted to switch to EncryptSensitiveWithPassword. To do that we had to update the project property and then change every package manually (I tried looping with dtutil.exe, but for some reason it did not take). To get the project to build I had to open every single package, set the password, and then build. After a few hours of updating and saving every package I was able to build and deploy.
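For reference, the dtutil loop I tried looked roughly like the sketch below (PACKAGEPASSWORD is a placeholder, and protection level 2 corresponds to EncryptSensitiveWithPassword); in theory it should stamp the password onto every package, although in our case the packages still failed the consistency check afterwards:
for %f IN (*.dtsx) DO dtutil.exe /file %f /encrypt file;%f;2;PACKAGEPASSWORD /quiet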
After that I committed and pushed to source control (Azure Git). When my co-worker pulled and opened the project, he was unable to build and hit the same error. If he enters the password, checks everything in, and I pull it back down, I get the error again.
The package and project passwords match and I can build locally, but as soon as the project is pulled down on another machine the error comes back.
The error is:
"Project consistency check failed. The following inconsistencies were detected:
[package Name] has a different password than the project"

I was able to get around this issue, though I never figured out exactly what was going on. I changed the protection level to DontSaveSensitive and made sure all passwords and other sensitive values were parameterized and passed in through SSIS environment variables.
Once no sensitive data was being saved in the packages, I changed the project protection level and the protection level of every package. I changed the packages with this command:
for %f IN (*.dtsx) DO dtutil.exe /file %f /encrypt file;%f;0 /quiet
This changed the setting on the packages, but I still got the error when building. I had to open each package, flip the protection level property to any other value and back to DontSaveSensitive, and then build; once I did that, the package dropped out of the error message. After doing that manually for 650+ packages I was able to build. The most important result is that once I pushed and my co-worker pulled the changes down, he did not have to edit each package. While we were using EncryptSensitiveWithPassword, we could not stop it from requiring a change and rebuild of every package.
This is still a bit of a mystery; it is frustrating that dtutil.exe could not change the packages without each one then needing to be touched and rebuilt. But this workaround ultimately got us past the problem, and parameterizing was probably the better practice anyway.

I just had the same issue. My problem was that I had forgotten to enter the password in the properties of every package (not only in the project); after doing that I was able to rebuild the project.

Related

Sonar Strange Encoding Issue?

I recently rolled out an update for Jenkins to kick off sonar-scanner against version 5.6 of SonarQube. I'm not using the plugin, just a command-line call to sonar-scanner from the directory where the sonar-project.properties file resides.
So far all of the developers have followed the same steps and configured the properties file for their services, and it works great except in a few cases. Two developers have hit a strange issue where the scan fails with:
"Caused by: Not authorized. Analyzing this project requires to be authenticated. Please provide the values of the properties sonar.login and sonar.password."
I thought this was strange, because the other developers would presumably have hit the same issue if the authentication token in my instructions were wrong. I compared a working copy with the first developer's version and the only differences were project-specific values such as the DLL name and version (I'll provide a template below). Since the file looked fine, I saved off the broken copy, pasted the contents of another working copy into it, changed the project-specific properties, and committed it to Subversion. Sonar scanned successfully!
Out of curiosity, I then compared the old broken file and the new working copy line by line. There was absolutely no difference in any character, so I figured it had to be an encoding issue. As a quick test I added the sonar encoding property and committed that back, and the scan failed, so I reverted to the working copy and moved on.
The next day a second developer came to me with the exact same issue. I tried the same steps as before: copy the contents of a working copy, paste them into the broken file, and commit it back in. This time the workaround did not work. In fact, I tried about five different working copies to paste from and they all failed with the same authorization error, even though I know the properties file, token and all, is correct.
I'm not sure what to do at this point. I haven't come across any logs on the server that tell me anything useful, unless there is a log I'm unaware of.
# Token
sonar.login=SOMESECRETTOKEN
# Unique project key for sonar
sonar.projectKey=SOMESERVICE
# UI Settings for sonar
sonar.projectName=SOMESERVICE
sonar.projectVersion=SOMEVERSION
# Path to source, if not set it searches from this
# file's directory
sonar.sources=.
# Encoding of the source code. Default is default system encoding
#sonar.sourceEncoding=UTF-8
# StyleCop / FxCop
sonar.stylecop.projectFilePath=./SOMEPROJ.csproj
sonar.cs.fxcop.assembly=./bin/Release/SOMEDLL.dll
sonar.cs.fxcop.fxCopCmdPath=C:/Program Files (x86)/Microsoft Fxcop 10.0/FxCopCmd.exe
sonar.fxcop.assemblies=./bin/Release/SOMEDLL.dll
Any help or pointers are appreciated, thanks!
This isn't about your encoding or file contents, but about permissions. The user that runs the scan doesn't have Execute Analysis permissions on the projects in question.
And to create new projects with the first analysis, the user must also have the Create Projects permission.
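To take the properties file out of the equation while you test, you can also pass the credentials directly on the command line (a sketch; the host URL and token are placeholders, and -D properties given on the command line override the ones in sonar-project.properties):
sonar-scanner -Dsonar.host.url=http://SONARSERVER:9000 -Dsonar.login=SOMESECRETTOKEN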
When I encountered this issue, I loaded the file in Notepad++, which showed that the file had been saved in whatever odd encoding Visual Studio gives text files. I fixed it by switching the encoding to UTF-8, which resolved the problem. This probably should be handled better in Sonar!
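If you would rather check and fix this from a shell, something like the following works (a sketch; I'm assuming the odd encoding is UTF-16, which Visual Studio sometimes uses, so adjust the -f value to whatever file reports):
file sonar-project.properties
iconv -f UTF-16LE -t UTF-8 sonar-project.properties > sonar-project.properties.utf8
mv sonar-project.properties.utf8 sonar-project.properties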

Getting 'no such module RealmSwift' error after taking a checkout of the code

I am using RealmSwift in my project. I followed all the instructions while setting up Realm, like dragging the frameworks into the Embedded Binaries section, setting up the framework search path, and adding the required Run Script build phase. The project works fine after that. Then, while committing the changes, I committed the header files, bcsymbolmap files, etc., of the included Realm frameworks.
After that, I took a fresh checkout of my project. On opening the project after the checkout, I am getting this error: 'No such module RealmSwift'.
I tried deleting the frameworks and adding them again, and cleaning the project. The project just won't compile. It keeps giving the same error. What am I doing wrong?
Hmm, there's no single good answer for solving this sort of problem, as it can happen for a variety of reasons.
More often than not, as in this SO question, it's caused by the framework search paths not being set up correctly, so the project can't see the framework.
If worse comes to worst, delete absolutely every reference to RealmSwift in your project (including in the build settings) and try installing it from scratch again. Good luck!
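As a concrete thing to check (a sketch, assuming the Realm frameworks were dragged into the project root; adjust the path to wherever Realm.framework and RealmSwift.framework actually live), the target's Framework Search Paths build setting should contain an entry like:
FRAMEWORK_SEARCH_PATHS = $(inherited) "$(PROJECT_DIR)"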

ClickOnce Error "different computed hash than specified in manifest" when transferring published files

I am in an interesting situation where I maintain the code for a program that is used and distributed primarily by our sister company. We are ready to distribute the program to all of the 3rd-party users, and since it is technically our sister company's program, we want to host it on their website. (In the interest of anonymity, I'll use 'program' everywhere instead of the actual application name, and 'www.SisterCompany.com' instead of their actual URL.)
So I get everything ready to go: I set up the Publish settings to check for updates at program start, set the minimum required version, and set the Installation Folder URL and Update Location to "http://www.SisterCompany.com/apps/program/", with the actual Publishing Folder Location as "C:\LocalProjects\Program\Publish\". Everything else is pretty standard.
After publishing, I confirm that everything installs and works correctly when run directly from the publish location on my C: drive. So I put everything on our FTP server, and the guy at our sister company pulls it down and places it all in the '/apps/program/' directory on their web server.
This is where it goes bad. When I try to install from their site, I get the error "File, Program.exe.config, has a different computed hash than specified in manifest." I tested it a bit, and I even get that error trying to install from any network location on our network other than my local C: drive.
After doing the initial publish in Visual Studio, I have not changed any files (changed files being the usual cause I've found when searching on this error).
What could be causing this? Is it because I set the Installation Folder URL to a location it isn't initially published to?
Let me know if any additional info is needed.
Thanks.
After bashing my head against this all weekend, I have finally found the answer. After unsigning the project and removing the hash on the offending file (an XML file), I got the program to install, but it was giving me 'Windows Side by Side' errors. I drilled down into the app cache where the file was, and instead of a config .xml file, it was one of the HTML files from the website the ClickOnce installer was hosted on. It turns out the web server didn't like serving up an .XML (or, as it also turned out, .mdb) file.
This MSDN article ended up giving me the final solution:
I had to make sure that the 'Use ".deploy" file extension' option was selected so that the web server wouldn't mangle files with extensions it didn't like.
I couldn't figure out why that one file's hash would be different. Turns out it wasn't even the same file at all.
Is it possible that one of the FTP transfers is happening in text mode rather than binary?
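If the upload is scripted with the stock command-line ftp client, it's worth forcing image (binary) mode explicitly before transferring; a minimal sketch, using the config file from the question as the example:
ftp> binary
ftp> put Program.exe.config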
For me the problem was that the .config transformations were applied after the manifest was generated.
To anyone else who's still having trouble, five years later:
The first problem was configuring the MIME types, which on nginx (/etc/nginx/mime.types) should include entries like these:
application/x-ms-application    application;
application/x-ms-manifest       manifest;
See ClickOnce Server and Client Configuration.
The weirder problem to me was that I was using git to handle the push to the server, i.e.
git remote add live ssh://user@mybox/path/to/publish
git commit -am "committing..."; git push live master
Works great for most things, but it was probably being registered as a "change," which prevented the app from installing locally. Once I started using scp instead:
scp -r * user@mybox:/path/to/dir/
It worked without a hitch.
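If you'd rather keep the git-based push, one plausible culprit (an assumption on my part, not something I verified) is line-ending normalization silently rewriting the published files and invalidating their manifest hashes. Turning it off for the publish repository looks like this:
git config core.autocrlf false
echo "* -text" >> .gitattributes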
It is unfortunate that there is not a lot of helpful information out there about this.

Logging into TFS on a Mac

I got Team Explorer Everywhere so we can use TFS on the Mac Mini we bought to test iPhone apps. Since we're using Xcode for PhoneGap, we need to use the command-line client, and it is giving me a lot of grief.
What I've done so far (Listing out for anyone who stumbles on this so they can use it):
-Downloaded the trial (free)
-Set the path using PATH=$PATH:/FOLDERLOCATION
-Accepted EULA and got trial product key... for command line program (tf eula/tf productkey -trial)
-Set up workspace:
tf workspace -new WORKSPACENAME -server:http://SERVERNAME:PORT/FILEPATH -comment:"WORKSPACENAME" (prompted for username -> domain -> password)
-Trying to setup the folder path (Fixed):
tf workfold -map SERVERFOLDERPATH LOCALFOLDERPATH -collection:http://SERVERNAME:PORT/FILEPATH -workspace:WORKSPACENAME (prompted for username -> domain -> password)
-Make sure I can check out/check in (On hold):...
The error I'm getting right now is "An argument error occurred: First free argument must be a server path." This is what I've been following ever since I got the path set, but I think the versions are different because mine doesn't seem to be set up the same way. Any help at all would be appreciated, and I'll keep updating this post as I figure parts out, because there doesn't seem to be much that I can find online about TFS on Macs.
Update: As usual, I'm an idiot. You have to put the options at the end of the command, and the server folder path has to be the first thing after -map. Now I just need to figure out how to use the damn thing. I'll post any other questions I have and try to get all the correct commands up here, for the selfish reason of having them somewhere in case I forget them later.
Update 2: The mapping hasn't worked out as well as I'd hoped. It seems a combination of my unfamiliarity with Unix/Mac file systems and some missing settings is keeping me from using 'tf get' to pull down the test data I was after. I plan to try again once I know where my boss wants the data saved, and once I can look into something that will persist the workspace so it doesn't say it can't find the mapped path every time...
It looks like you're setting up your workspace and some working folder mappings just fine, after the edit. If you're having problems doing a tf get after this, then there are some common problems that might be occurring. TFS workspaces can be a little bit opaque and having a better understanding of them can sometimes help you understand where the problem is:
Team Foundation Server requires a workspace to be configured before you can get files out of source control, edit them, or check them back in. A workspace is basically a set of working folder mappings that map your local path(s) to server path(s).
Workspaces are stored on the server and are uniquely identified by your computer's hostname, your username and the workspace's name. A cache of this information for the local host is saved on the client. This implies:
If you remove a workspace on the server, your workstation will be unable to connect.
If you remove the cache, your local computer will not be able to identify the workspace based on working folder mappings until the cache is rebuilt (which happens every time you connect to the server.)
If you change your username or local workstation's name, you cannot access those workspaces.
(Note that very early versions of the Teamprise command line client had certain issues on Mac OS that made identifying the local workstation name difficult. This is fixed, however, in Team Explorer Everywhere.)
Because you can have multiple workspaces for a single server on a single workstation, you can't always simply provide server paths to tf commands, since server paths are ambiguous. ($/ exists in every workspace, for example.) So the command line client resolves paths based on the current working directory and/or the arguments provided. Meaning that you can run tf get foo.txt if you're in a working folder, or you can run tf get /tmp/foo.txt if /tmp is mapped.
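For example, from inside a mapped local folder, the following is a quick way to see which mappings the client knows about and then pull everything down (a sketch reusing the placeholder names from the question; running tf workspaces also refreshes the local workspace cache from the server):
tf workspaces -collection:http://SERVERNAME:PORT/FILEPATH
tf workfold
tf get . -recursive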
One more point - the configuration data for Team Explorer Everywhere is shared between the TFS plug-in for Eclipse and the command line client. So if you're more comfortable using a GUI to set up your workspace(s), you can do that and then use the CLC as you see fit. You don't need to be a Java programmer to use Eclipse - simply download Eclipse and install the TFS plug-in for Eclipse into it, and select Window > Open Perspective > Team Foundation Server Exploring. After that, you'll have the full GUI Team Explorer experience and this perspective will be restored when you open Eclipse, so you won't even need to worry about the Java IDE bits if you don't want to.

I've been asked to deploy, but I can't make the magic happen

I've added a couple of lines to a file that builds into, let's say, foo.dll. The application is made up of more than one DLL, but this is the core one. All I did was add a couple of lines so it writes some log data to the database; it should not affect any other files whatsoever.
So I tried to deploy it. We don't have a magical one-click deploy; we just copy the right files to the right place.
Since my change is in foo.dll, I figured I could just copy foo.dll over and the server would be happy.
I was wrong. Browsing the website I now get "Generic Errors", and I don't know what that means. I've also tried copying all of the new DLL files (4 in total), but that did not solve the problem either.
The error it gives is:
HTTP Error 404.0 - Not Found
Module: IIS Web Core
Notification: MapRequestHandler
Handler: StaticFile
Error Code: 0x80070002
Replacing the new foo.dll with the old one solves the problem, and I've already tried restarting the web server. :-(
I assume you have "published" and not just "compiled" your web project?
You also need to take care of the solution configuration: Debug versus Release.
In a normal publish process you would switch the configuration to Release and publish your project into a separate folder.
After you have done that, you just need to collect the desired files and upload them.
Keep in mind that you need the newest version of your web project. Maybe there are changes online that your local project doesn't have; that would cause problems like this.
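For what it's worth, a Release file-system publish can also be scripted from the command line, roughly like this (a sketch; WebProject.csproj is a placeholder for your project file and FolderProfile for a file-system publish profile created in Visual Studio):
msbuild WebProject.csproj /p:Configuration=Release /p:DeployOnBuild=true /p:PublishProfile=FolderProfile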
We don't have a magical one-click deploy
Why not? It's not magic, and it's pretty easy to set up. Get any continuous integration software (I would recommend BuildMaster since I am a developer for it and it's free now) and you'll never have this problem again.
