Remote Swift package incomplete when using versioning - Xcode

I am in the process of modularising my app using Swift packages, and since some of these modules will be useful for future apps, I have created standalone packages and added them to the main project as dependencies.
As recommended, I am using semantic versioning and have tagged the commit as per the Apple documentation (see here) and pushed to my GitHub account.
When I attempt to add this package to my app using version rules ("up to next major version") the dependency does not seem to be resolved properly; the code cached locally only contains a bare package without any of the code I subsequently added.
If I remove version rules and use commit rules instead, I get the whole package as expected but that may cause problems with compatibility in the future as the package evolves.
Checking the version control in Xcode and directly on Github shows that the version tag 1.0.0 is present and associated with the last commit which is puzzling. The steps taken in an attempt to resolve the issue are as follows:
force push package (including tag) to GitHub
remove version control from the package by deleting the .git directory, then re-initialize version control, commit, tag, and push to the remote
reset package caches
clear derived data
close and restart Xcode
None of the above worked, and I initially thought I was not doing things in the correct order. However, I'm particularly puzzled that removing version control and then setting it back up again (with the codebase in a complete state at that point) didn't work, since the bare-bones template package was never pushed to the remote in the first place. I don't understand how Xcode can still be retrieving it. IMO this suggests there's a cache somewhere it's falling back to (despite being told to clear the cache), but I stand to be corrected.
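For reference, a sketch of the git commands behind those steps (assuming the remote is origin and the tag is 1.0.0; the cache path is SwiftPM's default location on macOS):

# confirm which commit the tag actually points at
git log --oneline -1 1.0.0
# re-point the tag at HEAD and push it explicitly (a plain git push does not push tags)
git tag -f 1.0.0
git push -f origin 1.0.0
# manually clear SwiftPM's checkout cache so Xcode cannot fall back to a stale copy
rm -rf ~/Library/Caches/org.swift.swiftpm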
Screenshots of the remote (showing the correct tag) and the states of the Xcode package dependency using branch rules and version rules are attached.

Related

Prevent resolving cyclic dependency to local workspace

I have a package which offers basic utilities. It has a dev-dependency on a tool that helps build it. That tool in turn needs features from the package.
The problem is that, during development of the package, the package's dependency on itself gets resolved to the local workspace instead of being fetched from a registry, which obviously won't work, for two reasons:
it isn't built
the version isn't bumped yet, but there may already be breaking changes
Personally, I don't know why this behavior is desirable in the first place, but how do I disable it, forcing resolution to the registry and fetching the current latest published version?
Using Yarn 3.2.0 with PnP (I could not see any related changes in 3.2.1, so IMHO that release is irrelevant here).
The behavior can be disabled via enableTransparentWorkspaces: false in .yarnrc.yml.
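A minimal sketch; the same key can be set from the shell, which writes it into .yarnrc.yml:

yarn config set enableTransparentWorkspaces false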
After additional research, I found yarn-2-berry-npm-protocol-switches-to-workspace-resolution, which links to a related GitHub Q&A where the answer is given. Sadly, it never got accepted on GitHub, nor propagated to the related SO question, so I'll keep this one here.

How to version & publish snapshot/unfinished work on a Go module?

I'm new to Go, and I'm trying to develop a Go module and share it with my colleagues while I'm developing it. In the JVM/sbt world I used to publish my work with 'SNAPSHOT' appended to the version value, but how can I achieve the same with Go modules?
Versions for modules are tagged by using repo tags (e.g. git tag), following semantic versioning (https://semver.org/).
So, any version starting with v0 is treated as unstable and may make breaking changes at any time. Once you release a v1, you cannot make any breaking changes without bumping your major version, which also means changing your module path (e.g. appending /v2).
You also have the option of appending a pre-release suffix to your version (e.g. v0.3.0-snapshot.1) to mark it as unfinished; Go orders pre-release versions before the corresponding release and treats them as unstable. (Semver build metadata such as +foo is generally not usable in Go module versions, so pre-release suffixes are the practical way to convey this kind of information.)
I wrote https://blog.golang.org/using-go-modules as an overview of how to get started using modules.
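As a rough sketch of the SNAPSHOT analogue (the module path example.com/mymodule is hypothetical), tag and push a pre-release version, which colleagues can then pin explicitly:

git tag v0.3.0-snapshot.1
git push origin v0.3.0-snapshot.1
# a colleague pins the snapshot:
go get example.com/mymodule@v0.3.0-snapshot.1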

Refactored a package to camelCase in Perforce; the old package was not deleted and is now duplicated within the repository

I have a package which had been incorrectly named: all lowercase when it should have been camelCase. I used IntelliJ to refactor it, tested that it still worked, and submitted this to the Perforce repository.
Perforce marked the classes in the old lowercase package as updates, not deletions, and added the new package too, so my depot looks like this for every class in that package:
//...perforce repo..../src/main/java/thepackage/MyClass.java
//...perforce repo..../src/main/java/thePackage/MyClass.java
When Jenkins tried to build, it got compilation errors stating that each class is a duplicate.
Windows isn't case sensitive with folders, so it is unable to check out both file structures and only got the newer package, which meant I couldn't mark the old files for deletion. Marking the change for a revert or rollback also threw errors, as the classes in the old package were not available locally.
I resolved this by doing the following:
Without checking them out in Perforce, delete all the new classes that are being flagged as duplicates (or just delete the new camelCase package)
Use the Perforce Windows client to "Reconcile Offline Work..."
Here's the odd bit: in the popup you'll have both the old and new package versions (with identical classes) marked for deletion. Uncheck all the classes in the camelCase package and click OK.
Now your pending changelist will contain all the classes from the old package. Submit this.
Sync the latest version of your Perforce repo to re-download the classes you just deleted; this restores the classes in the camelCase package.
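A rough command-line equivalent of those GUI steps, assuming a case-sensitive server (depot paths are illustrative; "Reconcile Offline Work..." corresponds to p4 reconcile):

# after deleting the local camelCase duplicates without checking them out
p4 reconcile //depot/project/src/main/java/...
# keep the camelCase files out of the changelist so only the lowercase package stays opened for delete
p4 revert //depot/project/src/main/java/thePackage/...
p4 submit -d "Remove stale lowercase package left over from camelCase rename"
# force-sync to restore the camelCase classes that were deleted locally
p4 sync -f //depot/project/src/main/java/thePackage/...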

Managing multiple versions of internal (private) NuGet packages

Our development team has been fairly small and, until now, all working on a single Visual Studio 2012 solution. We are growing and wanting to create better separation with multiple solutions for different project teams.
However, there are occasions where the code in one solution will want to utilize code from another. We have decided using internal (i.e. private) NuGet packages will be a good way to manage these dependencies.
Now the question has come up of how to deal with multiple versions of the same package that are in different SDLC stages (Development, QA, Staging, Production, etc.).
Example: If we have these three solutions...
CoreStuff
CoolProject1
CoolProject2
If, while working in CoolProject1, we need to utilize code from CoreStuff, we can add the NuGet package. Presumably this package will be the latest Production (stable) version of CoreStuff.
However, what if a developer working on CoolProject2 is aware of some changes in CoreStuff that are currently in Development and wants to utilize that version?
Not sure if the best approach is to create separate packages for each (seems to require changing your package references back and forth depending on what stage the solution is in) or somehow utilize multiple versions of the same package (not sure if that's easy to manage with NuGet).
Anyone tackle something like this?
The first thing to remember is that NuGet will not automatically update your package references, so if you have already 'linked' your solution to the latest stable package of CoreStuff (say 1.2.2), there won't be any problems when a newer (unstable) version is published, as long as the package you're using doesn't disappear from the package repository. You will only get the unstable package if you explicitly upgrade your package reference.
So the simplest solution is to make sure that you 'link' your project to the stable package by getting it via the NuGet package manager before the other package is released. While the UI only lets you get the latest version, the Package Manager Console can install any version of a package, so you can use it to specify the version number explicitly, e.g.:
Install-Package CoreStuff -Version 1.2.2 -Project CoolProject1
If that is not a solution then there are several other options to tackle this problem:
Give the development version a different semantic version that indicates it is an unstable version, e.g. 1.2.3-alpha. In this case CoolProject1 could pull in package CoreStuff.1.2.2 (the latest stable version in your repository) and CoolProject2 could pull in CoreStuff.1.2.3-alpha (the latest unstable version).
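For example, the Package Manager Console can install the pre-release version explicitly (version numbers are illustrative):
Install-Package CoreStuff -Version 1.2.3-alpha -IncludePrerelease -Project CoolProject2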
Have multiple repositories, e.g. one for stable (released) packages and one for unstable (development) versions. Then you can select your packages from the desired repositories. If you wanted to, you could make it so that only your release process can push packages to the stable repository while your CI build pushes to the unstable one (so that you always have the latest packages available).
If the developer of CoolProject2 just wants to develop against the latest version (but will wait to release CoolProject2 until after CoreStuff v.next has been released), then he could potentially create a local package repository (i.e. a directory on his drive) and put the new CoreStuff package there. That way other developers won't even see the package.
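As a sketch of that last option: a plain folder of .nupkg files works as a NuGet source, so the developer could install from it directly (the path and version are illustrative):
Install-Package CoreStuff -Version 1.2.3-alpha -Source C:\LocalPackages -Project CoolProject2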
The most important thing will be to make sure that you don't get CoreStuff.1.2.2 and CoreStuff.v-next in the same repository if CoreStuff.v-next simply has a higher version number, because in that case the NuGet UI won't let you pick v1.2.2 (but the Package Manager Console does!).
If you want to switch from one package type to another, you'd have to do a manual update (which you always have to do when changing to the next package version anyway), but that's not a bad thing, given that it forces a developer to at least check that the package update doesn't break anything.

What is the most effective way to lock down external dependency "versions" in Golang?

By default, Go pulls imported dependencies by grabbing the latest version in master (GitHub) or default (Mercurial) if it cannot find the dependency on your GOPATH. And while this workflow is simple to grasp, it has become somewhat difficult to control tightly. Because all software change incurs some risk, I'd like to reduce the risk of this potential change in a manageable and repeatable way and avoid inadvertently picking up changes of a dependency, especially when running clean builds via a CI server or preparing to deploy.
What is the most effective way I can pin (i.e. lock down or capture) a package dependency so I don't find myself unable to reproduce an old package, or even worse, unexpectedly broken when I'm about to release?
---- Update ----
Additional info on the Current State of Go Packaging. While I ended up (as of 7.20.13) capturing dependencies in a third-party folder and managing updates (à la Camlistore), I'm still looking for a better way...
Here is a great list of options.
Also, be sure to see the Go 1.5 vendor/ experiment to learn about how Go might deal with the problem in future versions.
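In Go 1.5 that behavior is opt-in via an environment variable:

export GO15VENDOREXPERIMENT=1
# with this set, the go tool resolves imports from a package's vendor/ directory before GOPATH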
You might find the way Camlistore does it interesting.
See the third party directory and in particular the update.pl and rewrite-imports.sh scripts. These scripts update the external repositories, change imports if necessary, and make sure that a static version of the external repositories is checked in with the rest of the Camlistore code.
This means that Camlistore has a completely repeatable build, as it is self-contained, but the third-party components can still be updated under the control of the Camlistore developers.
There is a project to help you manage your dependencies: check out gopack.
godep
I started using godep early last year (2014) and have been very happy with it (it addressed the concerns I mentioned in my original question). I am no longer using custom scripts to manage the vendoring of dependencies, as godep just takes care of it. It has been excellent for ensuring that no drift is introduced regardless of timing or a machine's package state. It works with the existing go get mechanism and introduces the ability to pin (godep save) and restore (godep restore) dependencies based on Godeps/Godeps.json.
Check it out:
https://github.com/tools/godep
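The basic workflow is just two commands (a sketch; run from your project root):

# snapshot the exact revision of every dependency into Godeps/Godeps.json (and vendor the source)
godep save ./...
# later, on a CI server or fresh machine, check those exact revisions back out
godep restore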
There is no built-in tooling for this in Go. However, you can fork the dependencies yourself, either on local disk or in a cloud service, and only merge in upstream changes once you've vetted them.
The third-party repositories are completely under your control. 'go get' clones tip, you're right, but you're free to check out any revision of the cloned-by-go-get or cloned-by-you repository. As long as you don't do 'go get -u', nothing touches the third-party repositories already sitting on your hard disk.
Effectively, your external, locally cloned dependencies are always locked down by default.
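Concretely, pinning one of those clones to a known-good revision is just (the path and revision are illustrative):

cd "$GOPATH/src/github.com/someuser/somelib"
git checkout 1a2b3c4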
