Xcode Continuous Integration bot fails after Git force push

We have an Xcode CI bot set up to poll our Git repository for new commits and build accordingly. Generally it runs fine. However, after forced pushes (I know) the bot will fail and never build correctly again. The solution has been to delete the bot and start over (and admonish ourselves for force-pushing).
Within the Xcode build logs there is no error; hitting the console, we can only confirm what the problem is (see the last log lines)...
Jan 28 08:15:51 macmini.local xcsbuildd[80853]: [CSBotSCMAction gitCloneRepositoryAtURL:branch:destinationPath:createDirectoryNamed:completionBlock:] : https://github.com/XXXXXXXX/iOS.git
Jan 28 08:15:51 macmini.local xcsbuildd[80853]: newRepoURL: https://github.com/XXXXXXXX/iOS.git
"https:\/\/githubuserformacmini#github.com\/XXXXXXXX\/iOS.git",
"https_github_com_XXXXXXXX_iOS_git"
"launchCommand" : "\/Applications\/Xcode.app\/Contents\/Developer\/usr\/bin\/git clone https:\/\/githubuserformacmini#github.com\/XXXXXXXX\/iOS.git --recursive --verbose --progress https_github_com_XXXXXXXX_iOS_git",
"launchPath" : "\/Applications\/Xcode.app\/Contents\/Developer\/usr\/bin\/git",
"GIT_ASKPASS" : "\/Applications\/Server.app\/Contents\/ServerRoot\/usr\/libexec\/xcs_ssh_auth_agent",
Jan 28 08:18:29 macmini.local xcsbuildd[80853]: Obtaining the HEAD hash at: /Library/Server/Xcode/Data/BotRuns/BotRun-187dbc1a-dae2-4ddc-a75b-75831de7ff09.bundle/tmp/https_github_com_XXXXXXXX_iOS_git
Jan 28 08:18:29 macmini.local xcsbuildd[80853]: [CSBotSCMAction gitHeadHashesRepositoryAtPath:branch:completionBlock:] : /Library/Server/Xcode/Data/BotRuns/BotRun-187dbc1a-dae2-4ddc-a75b-75831de7ff09.bundle/tmp/https_github_com_XXXXXXXX_iOS_git
Jan 28 08:18:29 macmini.local xcsbuildd[80853]: newRepoURL: file:///Library/Server/Xcode/Data/BotRuns/BotRun-187dbc1a-dae2-4ddc-a75b-75831de7ff09.bundle/tmp/https_github_com_XXXXXXXX_iOS_git/
"launchCommand" : "\/Applications\/Xcode.app\/Contents\/Developer\/usr\/bin\/git show-ref --heads",
"launchPath" : "\/Applications\/Xcode.app\/Contents\/Developer\/usr\/bin\/git",
"currentDirectoryPath" : "\/Library\/Server\/Xcode\/Data\/BotRuns\/BotRun-187dbc1a-dae2-4ddc-a75b-75831de7ff09.bundle\/tmp\/https_github_com_XXXXXXXX_iOS_git",
"launchCommand" : "\/Applications\/Xcode.app\/Contents\/Developer\/usr\/bin\/git checkout release",
"launchPath" : "\/Applications\/Xcode.app\/Contents\/Developer\/usr\/bin\/git",
"GIT_ASKPASS" : "\/Applications\/Server.app\/Contents\/ServerRoot\/usr\/libexec\/xcs_ssh_auth_agent",
"currentDirectoryPath" : "\/Library\/Server\/Xcode\/Data\/BotRuns\/BotRun-187dbc1a-dae2-4ddc-a75b-75831de7ff09.bundle\/tmp\/https_github_com_XXXXXXXX_iOS_git",
Jan 28 08:18:29 macmini.local xcsbuildd[80853]: [CSBotSCMAction gitHeadHashesRepositoryAtPath:branch:completionBlock:] : https://github.com/XXXXXXXX/iOS.git
Jan 28 08:18:29 macmini.local xcsbuildd[80853]: newRepoURL: https://github.com/XXXXXXXX/iOS.git
"https:\/\/githubuserformacmini#github.com\/XXXXXXXX\/iOS.git"
"launchCommand" : "\/Applications\/Xcode.app\/Contents\/Developer\/usr\/bin\/git ls-remote --heads https:\/\/githubuserformacmini#github.com\/XXXXXXXX\/iOS.git",
"launchPath" : "\/Applications\/Xcode.app\/Contents\/Developer\/usr\/bin\/git",
"GIT_ASKPASS" : "\/Applications\/Server.app\/Contents\/ServerRoot\/usr\/libexec\/xcs_ssh_auth_agent",
Jan 28 08:18:30 macmini.local xcsbuildd[80853]: [CSBotSCMAction gitCommitSummaryForRepositoryURL:betweenHashIdentifier:andHashIdentifier:completionBlock:] : /Library/Server/Xcode/Data/BotRuns/BotRun-187dbc1a-dae2-4ddc-a75b-75831de7ff09.bundle/tmp/https_github_com_XXXXXXXX_iOS_git
"launchCommand" : "\/Applications\/Xcode.app\/Contents\/Developer\/usr\/bin\/git log --no-color --name-status --format=fuller --date=iso e478616e4b3915846f7938fec24e8dc12cdae52a..f2c1b24a6b801ed9f7e60dce60add1851618da64",
"launchPath" : "\/Applications\/Xcode.app\/Contents\/Developer\/usr\/bin\/git",
"GIT_ASKPASS" : "\/Applications\/Server.app\/Contents\/ServerRoot\/usr\/libexec\/xcs_ssh_auth_agent",
"currentDirectoryPath" : "\/Library\/Server\/Xcode\/Data\/BotRuns\/BotRun-187dbc1a-dae2-4ddc-a75b-75831de7ff09.bundle\/tmp\/https_github_com_XXXXXXXX_iOS_git",
Jan 28 08:18:30 macmini.local xcsbuildd[80853]: [XCSCheckoutOperation.m:1033 7d02a310 +168ms] Error getting Git commit log in range e478616e4b3915846f7938fec24e8dc12cdae52a:f2c1b24a6b801ed9f7e60dce60add1851618da64 <stderr>= fatal: Invalid revision range e478616e4b3915846f7938fec24e8dc12cdae52a..f2c1b24a6b801ed9f7e60dce60add1851618da64
Jan 28 08:18:30 macmini.local xcsbuildd[80853]: [XCSCheckoutOperation.m:610 7d02a310 +0ms] Failed to get Git commit history for repo with error Error Domain=CSBotSCMAction Code=-1000 "fatal: Invalid revision range e478616e4b3915846f7938fec24e8dc12cdae52a..f2c1b24a6b801ed9f7e60dce60add1851618da64
I can see...
git log --no-color --name-status --format=fuller --date=iso e478616e4b3915846f7938fec24e8dc12cdae52a..f2c1b24a6b801ed9f7e60dce60add1851618da64
...coming from the integration. e478616e4b3915846f7938fec24e8dc12cdae52a is the commit that was removed (I assume) when I forced in f2c1b24a6b801ed9f7e60dce60add1851618da64. I don't know where the bot is keeping around that information. I tried deleting everything out of /Library/Server/Xcode/Data/BotRuns and that didn't work. I figured maybe it was pulling the last hash from /Library/Server/Xcode/Data/BotRuns/Latest, nope. I also dug around a ton of the other directories in /Library/Server/Xcode without seeing anything.
It would be nice if Xcode's CI approach gave us any control over the Git workflow. There are ludicrously few configuration options. Maybe deleting the bot is the only way to go.

It seems like someone removed a commit that was already pushed to a remote. Most CI servers detect changes in the repository with this basic workflow:
Fetch from remote
Do a git log or diff where the starting commit is the current HEAD of a cloned repository, and the end commit is the current HEAD of the remote branch the cloned repo is tracking
If the HEAD of the currently checked-out branch has been removed from the remote repository, Git will probably fail.
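In command terms, that polling step boils down to something like the following (the hash is a placeholder; the bot's actual invocation is the git log visible in the console output above):
git fetch origin
git log <last-integrated-hash>..origin/release   # fails once <last-integrated-hash> no longer exists in the clone after the force push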
You might be able to go to the directory where the CI build has the Git repository checked out and do:
git reset --hard origin/branch_name
This will bring your local and remote branches back into parity. Then you'll probably need to kick off a manual build.
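Putting that together for this bot, a minimal recovery sketch (the checkout path and the release branch come from the logs above; each integration may use a different BotRun bundle, so adjust the path accordingly):
cd /Library/Server/Xcode/Data/BotRuns/BotRun-187dbc1a-dae2-4ddc-a75b-75831de7ff09.bundle/tmp/https_github_com_XXXXXXXX_iOS_git
git fetch origin                  # refresh the remote-tracking refs after the force push
git reset --hard origin/release   # point the local branch at the rewritten remote history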
The real answer is to not alter Git's commit history when those commits have already been pushed to a remote.

Related

files don't exist yet "this exceeds GitHub's file size limit of 100.00 MB" [duplicate]

This question already has answers here:
How to remove file from Git history?
(8 answers)
Closed 1 year ago.
I am trying to git add, commit, push an update made to some Python code, where I changed the naming convention of the files.
NB: I want my local branch to replace the remote version.
I have also deleted these files from the data/ folder. However, git push and git push --force yield the same error:
remote: error: File workers/compositekey_worker/compositekey/data/20210617-031807_dataset_.csv is 203.87 MB; this exceeds GitHub's file size limit of 100.00 MB
remote: error: File workers/compositekey_worker/compositekey/data/20210617-032600_dataset_.csv is 180.20 MB; this exceeds GitHub's file size limit of 100.00 MB
But data/ only contains example datasets from online:
$ ls
MFG10YearTerminationData.csv OPIC-scraped-portfolio-public.csv
Is the problem to do with caching? I have limited understanding of this.
git status:
On branch simulate-data-tests
Your branch is ahead of 'origin/simulate-data-tests' by 6 commits.
(use "git push" to publish your local commits)
nothing to commit, working tree clean
git rm --cached 20210617-031807_dataset_.csv:
fatal: pathspec '20210617-031807_dataset_.csv' did not match any files
git log -- <filename> in data/:
$ git log -- 20210617-031807_dataset_.csv
commit 309e1c192387abc43d8e23f378fbb7ade45d9d3d
Author: ***
Date: Thu Jun 17 03:28:26 2021 +0100
Exception Handling of Faker methods that do not append to Dataframes. Less code, unqiueness enforced by 'faker.unique.<method>()'
commit 959aa02cdc5ea562e7d9af0c52db1ee81a5912a2
Author: ***
Date: Thu Jun 17 03:21:23 2021 +0100
Exception Handling of Faker methods that do not append to Dataframes. Less code, unqiueness enforced by 'faker.unique.<method>()'
A bit of a roundabout way, but it works effectively for this situation.
If you are sure that you want your local branch's files to end up in the remote branch, and you have been running into these errors because of files that were once committed but have since been deleted:
On GitHub online, go to your folder and select your branch.
Then use "Add file" > "Upload files" to manually upload the files you initially wanted pushed.
Then on your machine:
git checkout master
git branch -d local_branch_name
git fetch --all
I was successfully able to make a git push thereafter.
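For reference, the approach in the linked duplicate is to rewrite history so the oversized blobs disappear from every commit, not just from the working tree. A minimal sketch using git filter-repo (a separate tool that has to be installed; the paths come from the error message above, and <repository-url> is a placeholder):
pip install git-filter-repo
# run this in a fresh clone; filter-repo refuses to rewrite an already-used clone unless you pass --force
git filter-repo --invert-paths \
  --path workers/compositekey_worker/compositekey/data/20210617-031807_dataset_.csv \
  --path workers/compositekey_worker/compositekey/data/20210617-032600_dataset_.csv
# filter-repo removes the origin remote as a safety measure, so re-add it before force-pushing the rewritten branch
git remote add origin <repository-url>
git push --force origin simulate-data-tests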

Git blame delivers different results in Gitlab CI than on local machine

I've created a small shell script that uses the output of git blame to generate some reports. This works great on my local machine. But then I wanted to integrate the shell script into our GitLab CI/CD pipeline, and for some reason, when I run git blame on GitLab, all lines are attributed to me and the latest commit.
This is what I put in my pipeline file to debug this issue:
run-git-blame:
  stage: mystage
  image: alpine
  before_script:
    - apk add bash git gawk
    - git status
    - git log
  script:
    - git blame HEAD -- somefile.txt
When I run git blame HEAD -- somefile.txt locally I get:
4588eb70a0b (DXXXXXXX 2019-05-07 15:35:22 +0200 1) abc=123
4588eb70a0b (DXXXXXXX 2019-05-07 15:35:22 +0200 2) def=456
4588eb70a0b (DXXXXXXX 2019-05-07 15:35:22 +0200 3) ghi=789
4588eb70a0b (DXXXXXXX 2019-05-07 15:35:22 +0200 4) jkl=abc
4588eb70a0b (DXXXXXXX 2019-05-07 15:35:22 +0200 5) mno=def
[...]
However the gitlab ci output looks like this:
^5e95b8e5 (Sebastian Gellweiler 2020-09-24 09:32:54 +0200 1) abc=123
^5e95b8e5 (Sebastian Gellweiler 2020-09-24 09:32:54 +0200 2) def=456
^5e95b8e5 (Sebastian Gellweiler 2020-09-24 09:32:54 +0200 3) ghi=789
^5e95b8e5 (Sebastian Gellweiler 2020-09-24 09:32:54 +0200 4) jkl=abc
^5e95b8e5 (Sebastian Gellweiler 2020-09-24 09:32:54 +0200 5) mno=def
[...]
But I have definitely not touched the file being inspected.
The git status command outputs the following on the server:
HEAD detached at 5e95b8e5
nothing to commit, working tree clean
And this looks fine, as 5e95b8e5 is the hash of the latest commit.
What also puzzles me is the output of git log on gitlab because it only shows one commit and no history:
commit 5e95b8e5efcfd155d4248f4849b848e9f1580a20
Author: Sebastian Gellweiler <sebastian.gellweiler@dm.de>
Date: Thu Sep 24 09:32:54 2020 +0200
...
This is what the history on my local machine looks like:
commit 5e95b8e5efcfd155d4248f4849b848e9f1580a20 (HEAD -> feature/XXX)
Author: Sebastian Gellweiler <sebastian.gellweiler@example.org>
Date: Thu Sep 24 09:32:54 2020 +0200
...
commit bc689804f87d906e1b2435249fc1aaaf32e49b20
Author: Sebastian Gellweiler <sebastian.gellweiler@dm.de>
Date: Wed Sep 23 18:40:18 2020 +0200
XXX
commit 9d6fa2336aee0938aef0bd78563b6f12dee5934f (master)
Merge: 90ef282515 3eabec88c2
Author: XXXX <XXX@example.org>
Date: Thu Sep 17 08:31:20 2020 +0000
XYZ
[...]
As you can see the top commit hash is the same on my local machine and on the remote.
I'm stuck here. Does anybody have an explanation for this discrepancy?
I think you have GitLab configured to make a shallow clone that has depth 1. Since it has only one commit, every file in the repository is the way it is due to the (single, one) commit that is in the repository, so that's the hash ID it produces.
In particular, this notation:
^5e95b8e5
indicates that Git knows there is something before 5e95b8e5, but not what it is, which in this case is the mark of a shallow repository. (Technically it's a boundary mark and you'd see it on some other commit if you had used a range expression, such as abc1def..HEAD. You can use the -b option, or some configuration items, to alter how these are shown.)
Git commits only connect to their parents, so Git can only go through history backwards. If you have a commit checked out that is older than 5e95b8e5, you will not see that commit in git blame or git log.
Alternatively, your local repo is out of date and you need to git pull.
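If the shallow clone is indeed the cause, one workaround is to deepen the history before running the report, for example by adding a line like this to the job's before_script (GitLab also exposes a GIT_DEPTH CI/CD variable; setting it to 0 disables shallow cloning for the job):
    - git fetch --unshallow   # fetch the full history so git blame and git log can see older commits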

git clone fails when destination is a network drive

I'm having issues cloning a git repository on Windows using git in a MINGW32 shell. The basic symptom is that cloning this specific repository works fine when the destination is on my local disk but the identical command fails when the clone destination is on a network drive. Here is the local clone command (with the repo name redacted):
drichards@LT-DR MINGW32 /c/temp
$ git clone -v --progress <repo> gitrepo
Cloning into 'gitrepo'...
POST git-upload-pack (250 bytes)
remote: Counting objects: 82, done.
remote: Compressing objects: 100% (71/71), done.
remote: Total 82 (delta 26), reused 0 (delta 0)
Unpacking objects: 100% (82/82), done.
Checking connectivity... done.
Trying exactly the same procedure on a network drive results in failure. In this case drive S: is mapped to a network location.
drichards@LT-DR MINGW32 /s/temp
$ git clone -v --progress <repo> gitrepo
Cloning into 'gitrepo'...
POST git-upload-pack (250 bytes)
remote: Counting objects: 82, done.
remote: Compressing objects: 100% (71/71), done.
fatal: failed to read object 9b081a422a5f7d06ff5a2cb4889d26b4f18c6181: Permission denied
fatal: unpack-objects failed
The folder 'gitrepo' is temporarily created and then cleaned up on failure, so I can't easily pick through it to find out what went wrong. It seemed like a fairly straightforward permissions issue, so I thought attaching ProcMon would be a good way to find out where it had gone wrong; however, when ProcMon is running, the clone works with no problems.
This seems to imply that there is some kind of consistency problem or race condition which is occurring when the destination is a network drive. I have collected the output when the destination is the network drive with GIT_TRACE set to 1 and ProcMon not connected - that doesn't reveal anything interesting to my eyes. Here is the failure moment:
11:07:48.262535 run-command.c:336 trace: run_command: 'unpack-objects' '--pack_header=2,82'
11:07:48.324578 git.c:350 trace: built-in: git 'unpack-objects' '--pack_header=2,82'
fatal: failed to read object 9b081a422a5f7d06ff5a2cb4889d26b4f18c6181: Permission denied
fatal: unpack-objects failed
I have also tried asking git to validate objects by adding git clone -c transfer.fsckObjects=1 but that doesn't change the symptoms.
As alluded to earlier, the problem seems specific to this repository. Here are a few other data points:
The code hosting server (which is separate from all the other machines mentioned so far) has many repositories and I've cloned several others onto the network with no issue.
The issue occurs regardless of whether a regular or bare clone is made
There doesn't appear to be anything special about the object which causes the failure (the packed size is intermediate - 727 bytes)
If I attach ProcMon and disconnect whilst git is unpacking, the process still fails, usually on a different object.
A colleague also experiences the same success / fail patterns depending on the destination location.
The process still fails if I execute git from a local directory, specifying the target directory as a UNC path
Here are a few vital stats on my local machine:
$ systeminfo
<snip>
OS Name: Microsoft Windows 8.1 Pro
OS Version: 6.3.9600 N/A Build 9600
<snip>
$ uname -a
MINGW32_NT-6.3-WOW LT-DR 2.5.0(0.295/5/3) 2016-03-31 18:26 i686 Msys
$ git --version
git version 2.8.3.windows.1
The server hosting the target directories in question runs Windows Server 2008. The code server is running gitlab-ce 8.9.3.
This is where the notion of "promisor remote" comes in.
That notion is used with the transfer.fsckobjects configuration: it tells "git fetch" to validate the data and connected-ness of objects in the received pack; the code to perform this check has been taught about the narrow clone's convention that missing objects that are reachable from objects in a pack that came from a promisor remote are OK.
Before Git 2.29 (Q4 2020), while packing many objects in a repository with a promisor remote, lazily fetching missing objects from the promisor remote one by one may be inefficient.
The code now attempts to fetch all the missing objects in batch (obviously this won't work for a lazy clone that lazily fetches tree objects as you cannot even enumerate what blobs are missing until you learn which trees are missing).
That might help with your git clone from a network drive.
See commit e00549a, commit 8d5cf95 (20 Jul 2020) by Jonathan Tan (jhowtan).
(Merged by Junio C Hamano -- gitster -- in commit 5c454b3, 04 Aug 2020)
pack-objects: prefetch objects to be packed
Signed-off-by: Jonathan Tan
When an object to be packed is noticed to be missing, prefetch all to-be-packed objects in one batch.
And still with Git 2.29 (Q4 2020), "git fetch" works better when the packfile URI capability is in use.
See commit 0bd96be, commit ece9aea, commit 42d418d (17 Aug 2020) by Jonathan Tan (jhowtan).
(Merged by Junio C Hamano -- gitster -- in commit bdccf5e, 03 Sep 2020)
fetch-pack: make packfile URIs work with transfer.fsckobjects
Signed-off-by: Jonathan Tan
When fetching with packfile URIs and transfer.fsckobjects=1, use the --fsck-objects instead of the --strict flag when invoking index-pack so that links are not checked, only objects.
This is because incomplete links are expected. (A subsequent connectivity check will be done when all the packs have been downloaded regardless of whether transfer.fsckobjects is set.)
This is similar to 98a2ea46c2 ("fetch-pack: do not check links for partial fetch", 2018-03-15, Git v2.17.0-rc1 -- merge), but for packfile URIs instead of partial clones.
With Git 2.31 (Q1 2021), we know more about the packfiles downloaded with the packfile URI feature.
See commit bfc2a36 (20 Jan 2021) by Jonathan Tan (jhowtan).
(Merged by Junio C Hamano -- gitster -- in commit d03553e, 03 Feb 2021)
Doc: clarify contents of packfile sent as URI
Signed-off-by: Jonathan Tan
Clarify that, when the packfile-uri feature is used, the client should not assume that the extra packfiles downloaded would only contain a single blob, but support packfiles containing multiple objects of all types.
technical/packfile-uri now includes in its man page:
blobs are excluded, replaced with URIs. As noted in "Future work" below, the
server can evolve in the future to support excluding other objects (or other
implementations of servers could be made that support excluding other objects)
without needing a protocol change, so clients should not expect that packfiles
downloaded in this way only contain single blobs.
With Git 2.33 (Q3 2021), the description of uploadpack.blobPackfileUri of a packfile-uri (used for fetching, including with a network drive path as in the OP) is enhanced:
See commit 3127ff9 (13 May 2021) by Teng Long (dyrone).
(Merged by Junio C Hamano -- gitster -- in commit 8e1d2fc, 10 Jun 2021)
packfile-uri.txt: fix blobPackfileUri description
Signed-off-by: Teng Long
Reviewed-by: Jonathan Tan
Fix the 'uploadpack.blobPackfileUri' description in packfile-uri.txt and the correct format also can be seen in t5702.
technical/packfile-uri now includes in its man page:
server to be configured by one or more uploadpack.blobPackfileUri= <object-hash> <pack-hash> <uri> entries.
Whenever the list of objects to be
sent is assembled, all such blobs are excluded, replaced with URIs.
As noted in "Future work" below, the server can evolve in the future to support excluding other objects (or other implementations of servers could be made that support excluding other objects) without needing a protocol change, so clients should not expect that packfiles downloaded in this way only contain single blobs.
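As an illustration of the promisor-remote machinery described above, a blobless partial clone is the simplest way to end up with one. This is only a sketch of the concept (it assumes the hosting server supports partial-clone filters), not a guaranteed fix for the network-drive failure:
git clone --filter=blob:none -v --progress <repo> gitrepo
# the origin of a partial clone is recorded as a promisor remote; missing blobs
# are fetched lazily (and, since Git 2.29, in batches) only when they are needed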

Teamcity Configuration Settings

I need to know the TeamCity setting that prevents re-triggering outdated builds/jobs once newer builds have succeeded.
I am facing an issue where TeamCity jobs can be re-triggered even if later builds are successful. If the trigger event was fired earlier, TeamCity should not run that job once the latest build has succeeded.
I have 2 jobs in TC for 1 branch -- Build-Precheck and Build-compile.
I can see that Build-compile just picks the latest available successful build from Build-Precheck and then queues up the next one, which may be an outdated build.
Build-Precheck takes only about 2 minutes to finish, so it quickly triggers the latest builds, I guess following the first-in-first-out principle.
Build-Precheck
06 Oct 14 14:33 - 14:35 (2m:01s) –7.1.4345
06 Oct 14 14:41 - 14:43 (2m:16s)- 7.1.4346
06 Oct 14 14:45 - 14:47 (2m:10s)- 7.1.4347
Build-compile
06 Oct 14 14:35 - 15:00 -7.1.0.4345
06 Oct 14 14:52 - 15:20 (28m:02s)- 7.1.4347
06 Oct 14 16:08 - 16:33 (24m:52s)- 7.1.4346
Is there any fix for this so that TC runs the latest builds rather than outdated ones?
Sounds like you are looking for Configuring Build Trigger.
AFAIK, there isn't a way to cancel queued builds if a given build passes. However, you can adjust the Build Triggers that queue those builds. Most likely, you'll need to set the Quiet Period on your VCS Build Trigger to longer than it takes for your build.
For example, if your full build takes 5 minutes, you should set the Quiet Period to 7. This way additional builds won't queue while a build is running.

msysgit: option to not set hidden flag

I'm using msysgit, and for files starting with a dot, e.g. .classpath, it automatically sets the hidden flag, which makes it impossible for IDEs to overwrite them. How can I prevent it from setting this hidden flag?
git config core.hidedotfiles "false"
Add --global to make this the default behaviour for all new repos:
git config --global core.hidedotfiles "false"
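Note that core.hidedotfiles only affects files Git creates from then on; a .classpath that is already marked hidden stays hidden. You can clear the flag yourself with the standard Windows attrib tool (also callable from the MINGW/Git Bash shell), for example:
attrib -h .classpath   # remove the hidden attribute from the existing file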
Make sure to use Git 2.22 (Q2 2019) or later when setting core.hidedotfiles before creating new repositories.
Before that, "git init" forgot to read platform-specific repository configuration, which made the Windows port ignore settings such as core.hidedotfiles.
See commit 2878533 (11 Mar 2019) by Johannes Schindelin (dscho).
(Merged by Junio C Hamano -- gitster -- in commit 6364386, 16 Apr 2019)
mingw: respect core.hidedotfiles = false in git init again
This is a brown paper bag.
When adding the tests, we actually failed to verify that the config variable is heeded in git init at all.
And when changing the original patch that marked the .git/ directory as hidden after reading the config, it was lost on this developer that the new code would use the hide_dotfiles variable before the config was read.
The fix is obvious: read the (limited, pre-init) config before creating the .git/ directory.
Please note that we cannot remove the identical-looking git_config() call from create_default_files(): we create the .git/ directory between those calls.
If we removed it, and if the parent directory is in a Git worktree, and if that worktree's .git/config contained any init.templatedir setting, we would all of a sudden pick that up.
This fixes git-for-windows#789
This is more robust with Git 2.32 (Q2 2021), where some leaks are plugged.
See commit 68ffe09, commit 64cc539, commit 0171dbc (21 Mar 2021), and commit 04fe4d7, commit e4de450, commit aa1b639, commit 0c45427, commit e901de6, commit f63b888 (14 Mar 2021) by Andrzej Hunt (ahunt).
(Merged by Junio C Hamano -- gitster -- in commit 642a400, 07 Apr 2021)
init: remove git_init_db_config() while fixing leaks
Signed-off-by: Andrzej Hunt
The primary goal of this change is to stop leaking init_db_template_dir.
This leak can happen because:
1. git_init_db_config() allocates new memory into init_db_template_dir without first freeing the existing value.
2. init_db_template_dir might already contain data, either because:
2.1 git_config() can be invoked twice with this callback in a single process - at least 2 allocations are likely.
2.2 A single git_config() invocation can invoke the callback multiple times for a given key (see further explanation in the function docs) - each of those calls will trigger another leak.
The simplest fix for the leak would be to free(init_db_template_dir) before overwriting it.
Instead we choose to convert to fetching init.templatedir via git_config_get_value() as that is more explicit, more efficient, and avoids allocations (the returned result is owned by the config cache, so we aren't responsible for freeing it).
If we remove init_db_template_dir, git_init_db_config() ends up being responsible only for forwarding core.* config values to platform_core_config().
However platform_core_config() already ignores non-core.* config values, so we can safely remove git_init_db_config() and invoke git_config() directly with platform_core_config() as the callback.
The platform_core_config forwarding was originally added in:
2878533 (mingw: respect core.hidedotfiles = false in git-init again, 2019-03-11, Git v2.22.0-rc0 -- merge listed in batch #5)
And I suspect the potential for a leak existed since the original implementation of git_init_db_config in: 90b4518 ("Add init.templatedir configuration variable.", 2010-02-17, Git v1.7.1-rc0 -- merge)

Resources