git clone fails when destination is a network drive - windows

I'm having issues cloning a git repository on Windows using git in a MINGW32 shell. The basic symptom is that cloning this specific repository works fine when the destination is on my local disk but the identical command fails when the clone destination is on a network drive. Here is the local clone command (with the repo name redacted):
drichards@LT-DR MINGW32 /c/temp
$ git clone -v --progress <repo> gitrepo
Cloning into 'gitrepo'...
POST git-upload-pack (250 bytes)
remote: Counting objects: 82, done.
remote: Compressing objects: 100% (71/71), done.
remote: Total 82 (delta 26), reused 0 (delta 0)
Unpacking objects: 100% (82/82), done.
Checking connectivity... done.
Trying exactly the same procedure on a network drive results in failure. In this case drive S: is mapped to a network location.
drichards@LT-DR MINGW32 /s/temp
$ git clone -v --progress <repo> gitrepo
Cloning into 'gitrepo'...
POST git-upload-pack (250 bytes)
remote: Counting objects: 82, done.
remote: Compressing objects: 100% (71/71), done.
fatal: failed to read object 9b081a422a5f7d06ff5a2cb4889d26b4f18c6181: Permission denied
fatal: unpack-objects failed
The folder 'gitrepo' is temporarily created and then cleaned up on failure, so I can't easily pick through it to find out what went wrong. It seemed like a fairly straightforward permissions issue, so I thought attaching ProcMon would be a good way to find out where it had gone wrong; however, when ProcMon is running, the clone works with no problems.
This seems to imply that there is some kind of consistency problem or race condition which is occurring when the destination is a network drive. I have collected the output when the destination is the network drive with GIT_TRACE set to 1 and ProcMon not connected - that doesn't reveal anything interesting to my eyes. Here is the failure moment:
11:07:48.262535 run-command.c:336 trace: run_command: 'unpack-objects' '--pack_header=2,82'
11:07:48.324578 git.c:350 trace: built-in: git 'unpack-objects' '--pack_header=2,82'
fatal: failed to read object 9b081a422a5f7d06ff5a2cb4889d26b4f18c6181: Permission denied
fatal: unpack-objects failed
I have also tried asking git to validate objects by adding git clone -c transfer.fsckObjects=1 but that doesn't change the symptoms.
As alluded to earlier, the problem seems specific to this particular repository. Here are a few other data points:
The code hosting server (which is separate from all the other machines mentioned so far) has many repositories, and I've cloned several others onto the network drive with no issue.
The issue occurs regardless of whether a regular or bare clone is made
There doesn't appear to be anything special about the object which causes the failure (the packed size is intermediate - 727 bytes)
If I attach ProcMon and disconnect whilst git is unpacking, the process still fails, usually on a different object.
A colleague also experiences the same success / fail patterns depending on the destination location.
The process still fails if I execute git from a local directory, specifying the target directory as a UNC path
Here are a few vital stats on my local machine:
$ systeminfo
<snip>
OS Name: Microsoft Windows 8.1 Pro
OS Version: 6.3.9600 N/A Build 9600
<snip>
$ uname -a
MINGW32_NT-6.3-WOW LT-DR 2.5.0(0.295/5/3) 2016-03-31 18:26 i686 Msys
$ git --version
git version 2.8.3.windows.1
The server hosting the target directories in question runs Windows Server 2008. The code server is running gitlab-ce 8.9.3.

This is where the notion of "promisor remote" comes in.
That notion is used with the transfer.fsckobjects configuration: it tells "git fetch" to validate the data and connectedness of objects in the received pack; the code that performs this check has been taught about the narrow clone's convention that it is OK for objects to be missing when they are reachable from objects in a pack that came from a promisor remote.
Before Git 2.29 (Q4 2020), while packing many objects in a repository with a promisor remote, lazily fetching missing objects from the promisor remote one by one could be inefficient.
The code now attempts to fetch all the missing objects in batch (obviously this won't work for a lazy clone that lazily fetches tree objects as you cannot even enumerate what blobs are missing until you learn which trees are missing).
That might help with your git clone from a network drive.
See commit e00549a, commit 8d5cf95 (20 Jul 2020) by Jonathan Tan (jhowtan).
(Merged by Junio C Hamano -- gitster -- in commit 5c454b3, 04 Aug 2020)
pack-objects: prefetch objects to be packed
Signed-off-by: Jonathan Tan
When an object to be packed is noticed to be missing, prefetch all to-be-packed objects in one batch.
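For context, a promisor remote is what a partial (lazy) clone leaves behind: the clone filters out some objects and records that the remote has promised to supply them on demand. A minimal sketch of how such a clone is created (the URL and file path are placeholders, not the OP's repository):
git clone --filter=blob:none https://git.example.com/group/project.git
cd project
# blobs are now fetched lazily from the promisor remote, e.g. on checkout or diff
git log -p -- some/file.c
With the Git 2.29 change above, missing objects noticed while packing are prefetched in one batch instead of one by one.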
And still with Git 2.29 (Q4 2020), "git fetch"(man) works better when the packfile URI capability is in use.
See commit 0bd96be, commit ece9aea, commit 42d418d (17 Aug 2020) by Jonathan Tan (jhowtan).
(Merged by Junio C Hamano -- gitster -- in commit bdccf5e, 03 Sep 2020)
fetch-pack: make packfile URIs work with transfer.fsckobjects
Signed-off-by: Jonathan Tan
When fetching with packfile URIs and transfer.fsckobjects=1, use the --fsck-objects instead of the --strict flag when invoking index-pack so that links are not checked, only objects.
This is because incomplete links are expected. (A subsequent connectivity check will be done when all the packs have been downloaded regardless of whether transfer.fsckobjects is set.)
This is similar to 98a2ea46c2 ("fetch-pack: do not check links for partial fetch", 2018-03-15, Git v2.17.0-rc1 -- merge), but for packfile URIs instead of partial clones.
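On the client side, a sketch of a clone that combines both features, assuming the server advertises packfile URIs (the URL is a placeholder; fetch.uriProtocols lists the protocols the client will accept for the extra packfile downloads):
git -c protocol.version=2 \
    -c fetch.uriProtocols=https \
    -c transfer.fsckObjects=true \
    clone https://git.example.com/group/project.git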
With Git 2.31 (Q1 2021), we know more about the packfiles downloaded with the packfile URI feature.
See commit bfc2a36 (20 Jan 2021) by Jonathan Tan (jhowtan).
(Merged by Junio C Hamano -- gitster -- in commit d03553e, 03 Feb 2021)
Doc: clarify contents of packfile sent as URI
Signed-off-by: Jonathan Tan
Clarify that, when the packfile-uri feature is used, the client should not assume that the extra packfiles downloaded would only contain a single blob, but support packfiles containing multiple objects of all types.
technical/packfile-uri now includes in its man page:
blobs are excluded, replaced with URIs. As noted in "Future work" below, the server can evolve in the future to support excluding other objects (or other implementations of servers could be made that support excluding other objects) without needing a protocol change, so clients should not expect that packfiles downloaded in this way only contain single blobs.
With Git 2.33 (Q3 2021), the description of uploadpack.blobPackfileUri for the packfile-uri feature (used for fetching, including with a network-drive destination as in the OP) is enhanced:
See commit 3127ff9 (13 May 2021) by Teng Long (dyrone).
(Merged by Junio C Hamano -- gitster -- in commit 8e1d2fc, 10 Jun 2021)
packfile-uri.txt: fix blobPackfileUri description
Signed-off-by: Teng Long
Reviewed-by: Jonathan Tan
Fix the 'uploadpack.blobPackfileUri' description in packfile-uri.txt; the correct format can also be seen in t5702.
technical/packfile-uri now includes in its man page:
server to be configured by one or more uploadpack.blobPackfileUri=<object-hash> <pack-hash> <uri> entries. Whenever the list of objects to be sent is assembled, all such blobs are excluded, replaced with URIs.
As noted in "Future work" below, the server can evolve in the future to support excluding other objects (or other implementations of servers could be made that support excluding other objects) without needing a protocol change, so clients should not expect that packfiles downloaded in this way only contain single blobs.

files don't exist yet "this exceeds GitHub's file size limit of 100.00 MB" [duplicate]

This question already has answers here:
How to remove file from Git history?
I am trying to git add, commit, and push an update made to some Python code, where I changed the naming convention of the files.
NB: I want my local branch to replace the remote version.
I have also deleted these files from the data/ folder. However, git push and git push --force yield the same error:
remote: error: File workers/compositekey_worker/compositekey/data/20210617-031807_dataset_.csv is 203.87 MB; this exceeds GitHub's file size limit of 100.00 MB
remote: error: File workers/compositekey_worker/compositekey/data/20210617-032600_dataset_.csv is 180.20 MB; this exceeds GitHub's file size limit of 100.00 MB
But data/ only contains example datasets from online:
$ ls
MFG10YearTerminationData.csv OPIC-scraped-portfolio-public.csv
Is the problem to do with caching? I have limited understanding of this.
git status:
On branch simulate-data-tests
Your branch is ahead of 'origin/simulate-data-tests' by 6 commits.
(use "git push" to publish your local commits)
nothing to commit, working tree clean
git rm --cached 20210617-031807_dataset_.csv:
fatal: pathspec '20210617-031807_dataset_.csv' did not match any files
git log -- <filename> in data/:
$ git log -- 20210617-031807_dataset_.csv
commit 309e1c192387abc43d8e23f378fbb7ade45d9d3d
Author: ***
Date: Thu Jun 17 03:28:26 2021 +0100
Exception Handling of Faker methods that do not append to Dataframes. Less code, unqiueness enforced by 'faker.unique.<method>()'
commit 959aa02cdc5ea562e7d9af0c52db1ee81a5912a2
Author: ***
Date: Thu Jun 17 03:21:23 2021 +0100
Exception Handling of Faker methods that do not append to Dataframes. Less code, unqiueness enforced by 'faker.unique.<method>()'
A bit of a roundabout way, but it works effectively for this situation.
If you are sure that you want your local branch's files to be in your remote branch, and have been running into these issues with files that were once committed but have since been deleted:
On GitHub online, go to your folder and select your branch.
Then use "Add file" > "Upload files" to manually upload the file you initially wanted pushed.
Then on your machine:
git checkout master
git branch -d local_branch_name
git fetch --all
I was successfully able to make a git push thereafter.
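An alternative sketch, in case you would rather purge the oversized CSVs from the branch history before pushing. This assumes the separate git-filter-repo tool is installed; the paths are taken from the error messages above, the remote URL is a placeholder, and filter-repo may require --force when run in a clone that is not fresh:
git filter-repo --invert-paths \
    --path workers/compositekey_worker/compositekey/data/20210617-031807_dataset_.csv \
    --path workers/compositekey_worker/compositekey/data/20210617-032600_dataset_.csv
# filter-repo removes the 'origin' remote as a safety measure; re-add it, then force-push
git remote add origin <remote-url>
git push --force origin simulate-data-tests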

git clone hangs at "checking connectivity"

OS - Windows 7 professional 64 bit
GIT for windows - Git-1.9.0 - Using Git bash
I started having problems with "git fetch" suddenly out of nowhere.
Sometimes git.exe would error out and sometimes the "git fetch" would just hang.
So I decided to start everything from scratch.
I uninstalled git for windows and reinstalled it (accepting all defaults), restarted the machine, created a brand new folder, and did the following:
$ git clone git@github.com:myid@example.com/myproject.git
Cloning into 'myproject'...
Enter passphrase for key '/c/Users/myid/.ssh/id_rsa':
remote: Counting objects: 287209, done.
remote: Compressing objects: 100% (86467/86467), done.
remote: Total 287209 (delta 188451), reused 287209 (delta 188451)
Receiving objects: 100% (287209/287209), 168.89 MiB | 328.00 KiB/s, done.
Resolving deltas: 100% (188451/188451), done.
Checking connectivity...
It consistently just hangs at "checking connectivity"
I have scanned the machine for viruses, trojans, and what have you, and no threats were found.
This is happening both at my work location and from home, so it's probably not the internet connection.
I'm not sure how to proceed or what to try next.
I removed the known_hosts file from my ~/.ssh folder, which did the trick. Everything works now.
This message is not related to network connectivity. This is about checking whether every object is connected to an existing reference.
A detailed answer can be found on Superuser.
Try running git with the environment variable GIT_CURL_VERBOSE=1 set to see what is going on.
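For example (a sketch; the URL is a placeholder, and GIT_CURL_VERBOSE only affects HTTP(S) remotes, so for an SSH URL like the one above GIT_TRACE is the more useful of the two):
GIT_CURL_VERBOSE=1 GIT_TRACE=1 git clone https://github.com/user/project.git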
This should improve with Git 2.34 (Q4 2021), where the code that handles a large number of refs in the "git fetch"(man) code path has been optimized.
See commit caff8b7, commit 1c7d1ab, commit 284b2ce, commit 62b5a35, commit 9fec7b2, commit 47c6100, commit fe7df03 (01 Sep 2021) by Patrick Steinhardt (pks-t).
(Merged by Junio C Hamano -- gitster -- in commit deec8aa, 20 Sep 2021)
fetch: avoid second connectivity check if we already have all objects
Signed-off-by: Patrick Steinhardt
When fetching refs, we are doing two connectivity checks:
The first one is done such that we can skip fetching refs in the case where we already have all objects referenced by the updated set of refs.
The second one verifies that we have all objects after we have fetched objects.
We always execute both connectivity checks, but this is wasteful in case the first connectivity check already notices that we have all objects locally available.
Skip the second connectivity check in case we already had all objects available.
This gives us a nice speedup when doing a mirror-fetch in a repository with about 2.3M refs where the fetching repo already has all objects:
Benchmark #1: HEAD~: git-fetch
Time (mean ± σ): 30.025 s ± 0.081 s [User: 27.070 s, System: 4.933 s]
Range (min … max): 29.900 s … 30.111 s 5 runs
Benchmark #2: HEAD: git-fetch
Time (mean ± σ): 25.574 s ± 0.177 s [User: 22.855 s, System: 4.683 s]
Range (min … max): 25.399 s … 25.765 s 5 runs
Summary
'HEAD: git-fetch' ran
1.17 ± 0.01 times faster than 'HEAD~: git-fetch'
You should execute "git prune".
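A minimal sketch (git prune is normally also run as part of git gc's housekeeping):
git prune        # remove unreachable loose objects
git gc           # or let gc handle pruning and repacking together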

Clearcase UCM: rebase fails - error (apparently) is a lie - what's the root cause?

I have a rather standard setup with a development stream (devStream) that delivers to and rebases from an integration stream (intStream).
I have a file in my development view (devView) that I can checkout, modify, and checkin normally.
However, when I attempt to deliver (and later rebase), I get an error that baffles me, especially as I have been delivering and rebasing for the past six months. It may be worth noting that we recently upgraded from Rhapsody 8.0.2 to 8.0.4 (and correspondingly upgraded the diffmerge tool that ClearCase's map file points to for Rhapsody files), but given when the errors started arising I can't see how this could be at fault.
Since it can be hard to get enough debug info from the graphical mode, I captured the results of some command-line runs.
Here's the (anonymized) result for
starting a rebase
cleartool> rebase -recommended
*SNIP*
Creating integration activity...
Setting integration activity...
Merging files...
Checked out "C:\CCVs\myDevView\shortenedPath\theFile.sbs" from version "\main\intStream\devStream\9".
Attached activity:
activity:NSSLB00001350@\projects "rebase devStream on 20131120.195128."
Needs Merge "C:\CCVs\myDevView\shortenedPath\theFile.sbs" [to \main\intStream\devStream\CHECKEDOUT from \main\intStream\9 base \main\intStream\8]
cleartool: Error: Unable to access "C:\CCVs\myDevView\shortenedPath\theFile.sbs": No such file or directory.
cleartool: Error: An error occurred while merging file elements in the target view.
cleartool: Error: Unable to perform merge.
cleartool: Error: Unable to perform integration.
cleartool: Error: Unable to rebase stream "devStream".
attempting to resume the rebase
cleartool> rebase -resume
Rebase in progress on stream "devStream".
Started by "XXXXX" at 11/20/2013 7:51:28 PM.
Merging files...
cleartool: Error: Unable to access "C:\CCVs\myDevView\shortenedPath\theFile.sbs": No such file or directory.
cleartool: Error: Some files are already checked out to a non-integration activity in the target view.
cleartool: Error: Unable to perform merge.
cleartool: Error: Unable to perform integration.
cleartool: Error: Some files are already checked out to a non-integration activity in the target view.
cleartool: Error: Unable to resume rebase.
listing the information associated with the thusly created integration rebase activity
cleartool> lsactivity -long NSSLB00001350
activity "NSSLB00001350"
2013-11-20T19:52:00-06:00 by XXXXXX
"Integration activity created by rebase on 11/20/2013 7:51:28 PM.
"
owner: XXXXX
group: XXXXX
stream: devStream@\projects
current view: myDevView
title: rebase devStream on 20131120.195128.
change set versions:
C:\CCVs\myDevView\shortenedPath\theFile.sbs@@\main\intStream\devStream\CHECKEDOUT.94426
clearquest record id: NSSLB00001350
clearquest record State: Active
cleartool>
listing checkouts for anyone, on any stream, anywhere in this portion of the directory structure
cleartool> lsco -r
--11-20T19:51 XXXXX checkout version ".\shortenedPath\theFile.sbs" from \main\intStream\devStream\9 (reserved)
Attached activity:
activity:NSSLB00001350@\projects "rebase devStream on 20131120.195128."
At this point I'm confused about how to proceed. Googling isn't bringing up anything that seems relevant (or maybe my google skills are weak).
It's also worth pointing out that my ClearCase skills are entirely self-taught on an as-needed basis... so I'm sure I've got holes in my knowledge. Meaning that even if something seems like it would have been obvious to do, please point it out; I may be unaware.
Requested Info
With no rebase being attempted
C:\CCVs\myDevView\shortenedPath>cleartool ls
theFile.sbs@@\main\intStream\devStream\9 Rule: ...\devStream\LATEST
theFile.sbs.merge
theFile.sbs.merge.1
theFile.sbs.merge.2
theFile.sbs.merge.3
theFile.sbs.merge.4
*snip (other files)*
In middle of failed rebase
theFile.sbs@@\main\intStream\devStream\CHECKEDOUT from \main\intStream\devStream\9 [not loaded, checkedout but removed]
*snip*
theFile.sbs.merge.5
So... the rebase is doing something odd to the file; but why does it disappear when doing a rebase/deliver and not when doing a normal checkout?
To debug this, you must go to the command line, in a shell, and go to the parent folder of the missing file:
cd /path/to/target/view/path/to/parent/folder
# in your case
cd C:\CCVs\myDevView\shortenedPath\
cleartool ls
cleartool lsvtree -graph .
The status of the file returned by the cleartool ls can give you a clue as to what is going on.
For instance "checkout but removed" would means the mergetool tried to access/open that file, but somehow it was deleted: that happens when said file is taken by a process, and cannot be completely checked out.
The lsvtree can also give you clues regarding the parent folder (to see if it was merged or not).
Another approach is to cancel that rebase, and try it again in a dynamic view instead of a snapshot view, in order to avoid any side-effect with a snapshot view not correctly updated.
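A sketch of that second approach (the view tag is a placeholder; on Windows, dynamic views are typically mounted under the M:\ drive):
cleartool rebase -cancel
# create a dynamic view attached to the same stream and start it
cleartool mkview -tag myDevView_dyn -stream devStream@\projects -stgloc -auto
cleartool startview myDevView_dyn
cd M:\myDevView_dyn\shortenedPath
cleartool rebase -recommended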
The OP Khanmots concludes in the comments:
I undid the rebase (to get a copy of the file) and started it again.
When it bombed out I copied the file back in and then restarted the rebase. It deleted the file again.
I then replaced the file while leaving the prompt to start the diffmerge tool open; this allowed diffmerge to actually launch... but when I told diffmerge to save (after resolving differences), it deleted the file and created another .merge.# file.
At this point it's looking like it's a diffmerge issue and not a clearcase issue.

Git on Windows, "Out of memory - malloc failed"

I have run into a problem with a repository and tried almost every possible config setting found out there, e.g. pack.windowMemory, etc.
I believe someone has checked a large file into the remote repository, and now each time I try to pull from or push to it, Git tries to pack it and runs out of memory:
Auto packing the repository for optimum performance. You may also
run "git gc" manually. See "git help gc" for more information.
Counting objects: 6279, done.
Compressing objects: 100% (6147/6147), done.
fatal: Out of memory, malloc failed (tried to allocate 1549040327 bytes)
error: failed to run repack
I have tried git gc and git repack with various options, but they keep returning the same error.
I've almost given up and am about to just create a new repo, but thought I'd ask around first :)
I found a solution here that worked for me.
In .git/config file (client and/or server) I added the following:
[core]
packedGitLimit = 128m
packedGitWindowSize = 128m
[pack]
deltaCacheSize = 128m
packSizeLimit = 128m
windowMemory = 128m
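The same settings can also be applied from the command line instead of editing .git/config by hand (values as above; add --global to set them for all repositories):
git config core.packedGitLimit 128m
git config core.packedGitWindowSize 128m
git config pack.deltaCacheSize 128m
git config pack.packSizeLimit 128m
git config pack.windowMemory 128m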
For reference (you might have already seen it), the msysgit issue dealing with that problem is ticket 292.
It suggests several workarounds:
Disable delta compression globally. For this you have to set pack.window to 0. Of course this will make the repository much larger on disc.
Disable delta compression for some files. Check the delta attribute in the gitattributes manpage.
git config --global pack.threads 1
git config --global pack.windowMemory 256m (you already tried that one, but also illustrated in "Error when pulling warning: suboptimal pack - out of memory")
other settings are mentioned in "git push fatal: unable to create thread: Resource temporarily unavailable" and "Git pull fails with bad pack header error" in case this is pack-related.
sm4 adds in the comments:
To disable the delta compression for certain files, in .git/info/attributes, add:
*.zip binary -delta
From Gitattributes man page:
Delta compression will not be attempted for blobs for paths with the attribute delta set to false.
Maybe a simpler workaround would be to somehow reset the history before that large file commit, and redo the other commits from there.
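A sketch of that approach (the placeholder is the last good commit before the large file was added; note that this rewrites history, so the branch will need a force push afterwards):
git rebase -i <last-good-commit>
# in the editor, drop or edit the commit(s) that added the large file
git push --force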
EDIT: Since git v2.5.0 (Aug 2015), git-for-windows (formerly MSysGit) provides 64-bit versions, as noticed by Pan.student.
In this answer I was originally advising to install 64-bit Cygwin (which provides a 64-bit Git).
I got a similar Out of memory, malloc failed issue using MSysGit when reaching the 4GB barrier:
> git --version
git version 1.8.3.msysgit.0
> file path/Git/cmd/git
path/Git/cmd/git: PE32 executable for MS Windows (console) Intel 80386 32-bit
> time git clone --bare -v ssh://linuxhost/path/repo.git
Cloning into bare repository 'repo.git'...
remote: Counting objects: 1664490, done.
remote: Compressing objects: 100% (384843/384843), done.
remote: Total 1664490 (delta 1029586), reused 1664490 (delta 1029586)
Receiving objects: 100% (1664490/1664490), 550.96 MiB | 1.55 MiB/s, done.
Resolving deltas: 100% (1029586/1029586), done.
fatal: Out of memory, malloc failed (tried to allocate 4691583 bytes)
fatal: remote did not send all necessary objects
real 13m8.901s
user 0m0.000s
sys 0m0.015s
Finally, 64-bit git from Cygwin fixed it:
> git --version
git version 1.7.9
> file /usr/bin/git
/usr/bin/git: PE32+ executable (console) x86-64 (stripped to external PDB), for MS Windows
> time git clone --bare -v ssh://linuxhost/path/repo.git
Cloning into bare repository 'repo.git'...
remote: Counting objects: 1664490, done.
remote: Compressing objects: 100% (384843/384843), done.
remote: Total 1664490 (delta 1029586), reused 1664490 (delta 1029586)
Receiving objects: 100% (1664490/1664490), 550.96 MiB | 9.19 MiB/s, done.
Resolving deltas: 100% (1029586/1029586), done.
real 13m9.451s
user 3m2.488s
sys 3m53.234s
FYI, on the 64-bit linuxhost:
repo.git> git config -l
user.email=name@company.com
core.repositoryformatversion=0
core.filemode=true
core.bare=true
repo.git> git --version
git version 1.8.3.4
repo.git> uname -a
Linux linuxhost 2.6.32-279.19.1.el6.x86_64 #1 SMP Sat Nov 24 14:35:28 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
If my answer does not fix your issue, you may also check these pages:
git clone out of memory even with 5.6GB RAM free and 50 GB hard disk
Git clone fails with out of memory error - “fatal: out of memory, malloc failed (tried to allocate 905574791 bytes) / fatal: index-pack failed”
git-clone memory allocation error
MSysGit issues tracker
Some of the options suggested in the selected answer seem to be only partially relevant to the issue or not necessary at all.
From looking at https://git-scm.com/docs/git-config, it appears that just setting the following option is sufficient (set only for the project here):
git config pack.windowMemory 512m
From the manual:
pack.windowMemory
The maximum size of memory that is consumed by each thread in git-pack-objects[1] for pack window memory when no limit is given on the command line. The value can be suffixed with "k", "m", or "g". When left unconfigured (or set explicitly to 0), there will be no limit.
With this, I never went over the specified 512m per thread; the RAM actually used was about half of that most of the time. Of course, the amount chosen here is user-specific, depending on the available RAM and the number of threads.
This worked for me, but I had to set the option via the command line using:
git config --global pack.windowMemory 512m

msysgit: option to not set hidden flag

I'm using msysgit, and for files starting with a dot, e.g. .classpath, it automatically sets the hidden flag, which makes it impossible for IDEs to overwrite them. How do I prevent it from setting this hidden flag?
git config core.hidedotfiles "false"
Add --global to make this the default behaviour for all new repos:
git config --global core.hidedotfiles "false"
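Note that this setting only affects files Git creates from then on; a file that has already been hidden can be un-hidden by hand from a Windows prompt:
attrib -h .classpath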
Make sure to use Git 2.22 (Q2 2019) when setting core.hidedotfiles before creating new repositories.
Before that, "git init" forgot to read platform-specific repository configuration, which made the Windows port ignore settings such as core.hidedotfiles.
See commit 2878533 (11 Mar 2019) by Johannes Schindelin (dscho).
(Merged by Junio C Hamano -- gitster -- in commit 6364386, 16 Apr 2019)
mingw: respect core.hidedotfiles = false in git init again
This is a brown paper bag.
When adding the tests, we actually failed to verify that the config variable is heeded in git init at all.
And when changing the original patch that marked the .git/ directory as hidden after reading the config, it was lost on this developer that the new code would use the hide_dotfiles variable before the config was read.
The fix is obvious: read the (limited, pre-init) config before creating the .git/ directory.
Please note that we cannot remove the identical-looking git_config() call from create_default_files(): we create the .git/ directory between those calls.
If we removed it, and if the parent directory is in a Git worktree, and if that worktree's .git/config contained any init.templatedir setting, we would all of a sudden pick that up.
This fixes git-for-windows#789
This is more robust with Git 2.32 (Q2 2021), where some leaks are plugged.
See commit 68ffe09, commit 64cc539, commit 0171dbc (21 Mar 2021), and commit 04fe4d7, commit e4de450, commit aa1b639, commit 0c45427, commit e901de6, commit f63b888 (14 Mar 2021) by Andrzej Hunt (ahunt).
(Merged by Junio C Hamano -- gitster -- in commit 642a400, 07 Apr 2021)
init: remove git_init_db_config() while fixing leaks
Signed-off-by: Andrzej Hunt
The primary goal of this change is to stop leaking init_db_template_dir.
This leak can happen because:
1. git_init_db_config() allocates new memory into init_db_template_dir without first freeing the existing value.
2. init_db_template_dir might already contain data, either because:
2.1 git_config() can be invoked twice with this callback in a single process - at least 2 allocations are likely.
2.2 A single git_config() invocation can invoke the callback multiple times for a given key (see further explanation in the function docs) - each of those calls will trigger another leak.
The simplest fix for the leak would be to free(init_db_template_dir) before overwriting it.
Instead we choose to convert to fetching init.templatedir via git_config_get_value() as that is more explicit, more efficient, and avoids allocations (the returned result is owned by the config cache, so we aren't responsible for freeing it).
If we remove init_db_template_dir, git_init_db_config() ends up being responsible only for forwarding core.* config values to platform_core_config().
However platform_core_config() already ignores non-core.* config values, so we can safely remove git_init_db_config() and invoke git_config() directly with platform_core_config() as the callback.
The platform_core_config forwarding was originally added in:
2878533 (mingw: respect core.hidedotfiles = false in git-init again, 2019-03-11, Git v2.22.0-rc0 -- merge listed in batch #5)
And I suspect the potential for a leak existed since the original implementation of git_init_db_config in: 90b4518 ("Add init.templatedir configuration variable.", 2010-02-17, Git v1.7.1-rc0 -- merge)
