I am trying to maintain my changes to config files inside resource.bundle directories that come from remote CocoaPods repositories.
While working on the implementation I am able to make the changes locally, but I do not own the external repository. I would like to be able to keep referring to the code owners' tags for the pods from their repos in my project, while maintaining my configuration changes.
It has been suggested to me to create a script phase in my build process that would copy files from an "assets" folder within the project to the finished pod directory after the remote pull and build.
This sounds feasible, but I am not sure where to start or what the script would look like.
Essentially, I would have a
root/assetsfolder/resource.bundle
that would need to be copied to
Pods/ExternalPodName/Core/resource.bundle
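Something like the following shell script phase is what I picture, using the paths from my example above (${SRCROOT} is the Xcode project root, and the exact pod path may well differ):
# Copy my local bundle over the one pulled in by CocoaPods
rsync -a "${SRCROOT}/assetsfolder/resource.bundle/" "${SRCROOT}/Pods/ExternalPodName/Core/resource.bundle/"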
Any help would be appreciated.
I want to build and deploy my projects with GitLab's pipeline hosted locally and used solely for build/deploy (sources are hosted elsewhere). Everything GitLab-related is nicely stored within a my_gitlab folder:
my_gitlab
├── config
├── data
├── docker-compose.yaml
└── logs
in it, and it runs with a single docker-compose up -d command. Runners, users, keys, etc. are all set up and persist between reboots. my_gitlab occupies 764 KB of disk space and can be pushed to a git repo to share the local build/deploy functionality.
The only problem is that you cannot initiate a pipeline by pointing it at a sources directory - you need to push the sources, with a .gitlab-ci.yml in them, to this locally hosted GitLab. Each such push causes the my_gitlab dir to grow to 200 MB+ in size.
Is there a way to strip repository data from GitLab, or to initiate a pipeline without pushing code? Is this even a reasonable way to use GitLab?
You can use GitLab's interface to start a new pipeline without pushing any code.
On the left side in your project go to CI/CD -> Pipelines -> Run Pipeline and select your branch.
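If you want to script it instead of clicking through the UI, the pipeline triggers API can do the same thing. A rough sketch (the host, project ID, branch and trigger token below are placeholders; the token is created under Settings -> CI/CD -> Pipeline triggers):
# Trigger a pipeline for the given ref without pushing any code
curl --request POST \
  --form "token=<trigger-token>" \
  --form "ref=master" \
  "https://gitlab.example.com/api/v4/projects/<project-id>/trigger/pipeline"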
I am using multiple packages that I import into different projects; these range from custom adapters for my business logic, shared by Lambda and Google Cloud Functions, to other public packages. The way I do this right now is that I vendor them and include them for cloud functions. For applications that can be compiled and deployed on a VM, I compile them separately. This works fine for me; however, it is a pain developing these modules.
If I update the method signatures and names in the package, I have to push my changes to GitHub / GitLab (my package path is something like gitlab.com/groupName/projectName/pkg/packageName) and then do a go get -u <packageName> to update the package.
This also does not really update it; sometimes I am stuck with an older version with no idea how to update it. I wonder whether there is an easier way of working with this.
For the sake of clarity:
Exported package 1
Path: gitlab.com/some/name/group/pkg/clients/psql
psql-client
└── pkg
    └── psql.go
Application 1 uses psql-client
Path: gitlab.com/some/name/app1
Application 2 uses psql-client
Path: gitlab.com/some/name/app2
My understanding is that (a) you are using the new Go modules system, and that (b) part of the problem is that you don't want to keep pushing changes to github or gitlab across different repositories when you are doing local development.
In other words, if you have your changes locally, it sounds like you don't want to round-trip those changes through github/gitlab in order for those changes to be visible across your related repositories that you are working on locally.
Most important advice
It is greatly complicating your workflow to have > 1 module in a single repository.
As your example illustrates, having > 1 module in a single repository is in general almost always more work on an ongoing basis, and it is also very hard to get right. For most people, the cost is not worth it. Often the benefit is not what people expect, or in some cases, there is no practical benefit to having > 1 module in a repo.
I would definitely recommend you follow the commonly followed rule of "1 repo == 1 module", at least for now. This answer has more details about why.
Working with multiple repos
Given you are using Go modules, one approach is to add a replace directive to a module's go.mod file that tells the go command the on-disk location of the other Go modules.
Example structure
For example, if you had three repos repo1, repo2, repo3, you could clone them so that they all sit next to each other on your local disk:
myproject/
├── repo1
├── repo2
└── repo3
Then, if repo1 depends on repo2 and repo3, you could set the go.mod file for repo1 to know the relative on-disk location of the other two modules:
repo1 go.mod:
replace github.com/me/repo2 => ../repo2
replace github.com/me/repo3 => ../repo3
When you are inside the repo1 directory or any of its child directories, a go command like go build or go test ./... will use the on-disk versions of repo2 and repo3.
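Putting that together, a minimal repo1 go.mod might look roughly like this (module paths and versions here are just illustrative placeholders):
module github.com/me/repo1

go 1.13

require (
    github.com/me/repo2 v0.0.0
    github.com/me/repo3 v0.0.0
)

// Point builds at the local working copies instead of the remote versions.
replace github.com/me/repo2 => ../repo2
replace github.com/me/repo3 => ../repo3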
repo2 go.mod:
If repo2 depends on repo3, you could also set:
replace github.com/me/repo3 => ../repo3
repo3 go.mod:
If for example repo3 does not depend on either of repo1 or repo2, then you would not need to add a replace to its go.mod.
Additional details
The replace directive is covered in more detail in the replace FAQ on the Modules wiki.
Finally, it depends on your exact use case, but a common solution at this point is to use gohack, which automates some of this process. In particular, it creates a mutable copy of a dependency (by default in $HOME/gohack, but the location is controlled by the $GOHACK environment variable). gohack also sets your current go.mod file to have a replace directive that points to that mutable copy.
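If I remember the commands correctly, usage looks roughly like this (module path taken from the example above):
# create a mutable local copy of the dependency and add a replace pointing at it
gohack get github.com/me/repo2
# later, remove the replace directives that gohack added
gohack undo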
go get is transitive, so you can just add it to your build process. A typical Go project build is basically:
go get -u ./... && go test ./... && go build ./cmd/myapp
This fetches and updates the dependencies, runs all project tests, and then builds the binary.
I have configured the build steps as below:
Created another build configuration (e.g. named "Send to SonarQube") and added a dependency on the initial configuration.
Added an artifact dependency on the ".teamcity/.NETCoverage/dotCover.dcvr" file, getting artifacts from "Build from the same chain".
In the new configuration ("Send to SonarQube"), added a Command Line step with the following script:
%teamcity.dotCover.home%\dotCover.exe report /ReportType=HTML /Source="dotCover.dcvr" /Output="dotCover.html"
Added the SonarQube Runner to the new configuration, with the additional command line argument "-Dsonar.cs.dotcover.reportsPaths=dotCover.html".
Please suggest what I might be missing.
Note: when I check dotCover.html, the coverage shows up perfectly, but SonarQube reports the code as 0% covered.
Since you are using build chains, you are probably switching directories and SonarQube uses absolute paths. To confirm this, look at the html/[nnn].html files in your working directory. In html -> head -> title, does the absolute path match the source code in your current working directory when you run the report command?
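One quick way to check this on a Windows build agent is something like the following (the html folder name is an assumption; adjust it to wherever the report files actually land in your working directory):
findstr /i /c:"<title>" html\*.html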
So to summarize, in your "Send to SonarQube", you need to ensure:
1. You have your source code in the working directory.
2. Your individual [nnn].html files have titles with absolute paths matching the source code in your working directory.
There are a few ways to ensure #2:
Way #1
Tell TeamCity to run all snapshot dependencies on the same agent.
Make sure your VCS setup is exactly the same. (For myself, I had excluded some folders in my "Send To SonarQube" equivalent, and that caused a different working directory)
Way #2
Override the Checkout Directory in your VCS setup for everything in the build chain to point to the same absolute directory.
(I haven't tried this, but it should work across agents since the agent name isn't in the directory path)
We have a Maven project for which we have set up Jenkins for builds. The repository has a large tools folder which I didn't want Jenkins to download.
I just want Jenkins to download the src folder and the pom.xml file.
I added two repository locations in Jenkins - only to learn that single file checkouts are not possible.
This forced me to use the shell script option provided by Jenkins for checking out pom.xml. Please find below the script outline:
svn checkout $pomUrl . --depth empty
svn update pom.xml
I did not find an option in my Jenkins SCM plugin to do an empty checkout:
Checkout one file from Subversion
But Jenkins' Poll SCM is only polling the src folder, and builds are not triggered if I make changes to pom.xml. Is there a way to ensure polling of pom.xml as well?
No. Jenkins will poll what it knows.
In your scenario:
Jenkins doesn't know about your pom.xml.
Jenkins doesn't work with single file checkouts anyway.
You will have to rearrange your structure: either move the tools folder outside of the main checkout (if it's so large that it's prohibitive, why is it in the root location?), or move the pom.xml into the src folder.
Edit:
Here is an idea. I haven't tried it, so I don't know if it will work.
Keep your manual checkout and update of that pom like you currently do.
Set up another SVN module via "Add module...".
Enter the SVN root location where your pom is, and give it a non-conflicting folder name.
Configure Repository depth for that module as Empty (if you don't see this option, you may need to upgrade your SVN plugin and/or Jenkins).
Click Advanced... section.
Configure Included Regions with the path to your src folder, and the pom only.
Something like:
/trunk/myapp/src/.*
/trunk/myapp/pom.xml