I have a Gerrit project without any dashboards defined. To make it easier to tell multiple projects on the same Gerrit server apart, I would like to create a new dashboard for one of the projects.
The official documentation (at least as of v2.13.5-2456) assumes that the branch where the dashboards are to be created already exists, which is not the case on my installation. As such, the steps needed to create the first dashboard for a project are omitted there.
So the question is: What are the necessary steps to create a first dashboard for a project? Are there any pitfalls? If so, how can they be avoided?
The biggest problem is creating the new meta branch that will house the dashboards. For that, make sure the user has the following access rights on the reference refs/meta/dashboards/*:
CreateReference
Push
Now check out your project as usual with git clone ssh://<user>@<server>:29418/<path/to/project> (adjust the port as necessary). You will have the current master branch in your working directory. However, the dashboards branch only works if the only files in it are actual dashboard configurations.
To solve this you have to create a new orphan branch, which does not have any history or files in it. That can be achieved with git checkout --orphan dashboard_local.
On this branch you can create your dashboard configuration with the syntax as documented in the official manual. Commit this file and make sure that no files other than dashboard configurations are in this branch.
Now this branch needs to be pushed to the server. You can use the regular Gerrit syntax here: git push origin HEAD:refs/meta/dashboards/<group>. Using the <group> identifier you can group several dashboards together in the Gerrit Web-UI.
If you made no syntax errors, your dashboard should now show up, and new dashboards can be added to this existing branch.
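The steps above can be sketched end to end. This is a local simulation under stated assumptions: a bare repository stands in for the Gerrit server, the group name custom and the dashboard file name main are arbitrary examples, and the dashboard syntax follows the git-config style documented in the Gerrit manual.

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/server.git"           # stand-in for the Gerrit server
git clone -q "$tmp/server.git" "$tmp/work"
cd "$tmp/work"
git config user.email dashboards@example.com
git config user.name "Dashboard Editor"
git symbolic-ref HEAD refs/heads/master        # make sure the first branch is "master"
git commit -q --allow-empty -m "initial"       # stand-in for the existing project history
git push -q origin HEAD:master

# orphan branch: no history, and we make sure no files are carried over
git checkout -q --orphan dashboard_local
git rm -rfq --cached . 2>/dev/null || true

# a minimal dashboard configuration (title and query are examples)
cat > main <<'EOF'
[dashboard]
  title = Example Dashboard
[section "Open changes"]
  query = status:open
EOF
git add main
git commit -q -m "Add first project dashboard"

# push to the magic ref; "custom" is the <group> identifier
git push -q origin HEAD:refs/meta/dashboards/custom
git ls-remote origin "refs/meta/dashboards/*"  # lists the new dashboards ref
```

On a real Gerrit server, replace the temp clone with the ssh:// clone from above; the orphan checkout, commit, and push steps are the same.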
Based on:
An explanation of empty (orphaned) branches in Git
MediaWiki Help on Gerrit dashboard
For one of our repositories we set "Custom CI configuration path" in GitLab to a remote gitlab-ci.yml. We want to do this to prevent developers from changing the gitlab-ci.yml file (protected files are only available in EE Premium and up). Apart from that purpose, the Custom CI configuration path feature should also work for merge requests.
Being in repo
group1/repo1
we set
.gitlab-ci.yml@group1/repo1-ci
The repo1-ci repository exists, and CI works correctly when we push to the configured branches etc.
For Merge Request functionality GitLab tells us:
Detached merge request pipeline #123 failed for ...
Project group1/repo1-ci not found or access denied!
We added the developers to the repo1-ci repo as Developers so they can read the files, but it does not help. Our expectation anyway is that the pipeline is not run with user permissions, so it should simply find the gitlab-ci.yml file.
Any ideas on this?
So our expectations were right, and it seems that we have to add one important thing to our considerations:
If a user interacts with the merge request features in the GitLab UI and you are using "Custom CI configuration path" for your gitlab-ci.yml file, please ensure that
the user has at least read permission on that remote file, even if you moved it to another repo on purpose (e.g. to use enhanced file protection in PREMIUM/ULTIMATE, or to push/merge-protect the branches against the Developer role)
the user's running session has picked up that permission change
The last part failed for our users, as it only worked one day later. It seems they just continued working from their open merge request page, and GitLab checks accessibility from that session (using a cookie, token, or something that was not updated with the new access to the remote repo/file)
It works!
I want to create branches for modifying the existing RAML.
Can anyone help me create a new branch, so that instead of committing new changes directly to master I can work on a feature branch (or some other branch)? I would also like to know how to merge the newly created branches into an existing branch.
Example:
Master > UAT > STAGE > INT > DEV --- I need to keep this hierarchy constant
and create a feature branch whenever any changes are requested.
I need to create all these branches,
but I currently have only one branch, Master.
So if anyone wants to update the existing RAML, they should be able to create a feature branch and send an approval request to merge it into any of the existing branches.
In Anypoint API Designer you can create branches from the master branch by following the instructions at https://docs.mulesoft.com/design-center/design-branching. Note, however, that the same page explicitly mentions that you cannot merge changes from a branch back into master.
I am investigating adding an app.json file to my heroku pipeline to enable review apps.
Heroku offers the ability to generate one from your existing app setup, but I do not see any way to prevent it from automatically committing the generated file to our repository's master branch.
I need to be able to see it before it gets committed to the master branch because we require at least two staff members to review all changes to the master branch (which triggers an automatic staging build) for SOC-2 security compliance.
Is there a way that I can see what it would generate without committing it to the repository?
I tried forking the repo and connecting the fork to its own pipeline, but because it did not have any of our Heroku add-ons or environment, it would not work for our production pipeline.
I am hesitant to just build the app.json file manually - it seems more prone to error. I would much prefer to get the automatically generated file and selectively remove items.
As a punchline to this story, I ended up investing enough time into the forked repository on its own pipeline to demonstrate a POC.
When you generate your app.json file, it should take you to a secondary screen that has the full app.json in plaintext at the bottom.
Why not open a PR with its contents in your project root? Once the file is detected in the repository, Heroku shouldn't ask you to regenerate it again.
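If you do end up assembling the file by hand after all, a minimal app.json skeleton looks roughly like this. The field names come from Heroku's app.json schema; every value here is a placeholder, not your generated configuration:

```json
{
  "name": "my-app",
  "description": "Review-app configuration (placeholder values)",
  "env": {
    "RACK_ENV": { "value": "staging" },
    "SECRET_KEY": { "generator": "secret" }
  },
  "addons": ["heroku-postgresql:hobby-dev"],
  "buildpacks": [{ "url": "heroku/nodejs" }],
  "scripts": { "postdeploy": "rake db:seed" }
}
```

Diffing a hand-written skeleton like this against the generated plaintext from the secondary screen makes it easy to selectively remove items before committing.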
We have multiple layers of our products split into different build configurations for continuous integration. For the sake of this question, let's just say we have a "Front-End CI" build, and a "API CI" build. The VCS roots are configured to pull in all branches, and triggered to run upon checkin, as should be expected for CI.
Now I have my User Acceptance project, where I use CloudFormation to dynamically spin up servers to deploy to. I have snapshot dependencies set up for the CI builds mentioned above, and everything works as expected for the default branches on each of the VCS roots and dependencies. I expect that a feature branch for the front-end may not require a corresponding branch off the API's default, and the current setup accounts for that as well.
That's where I begin to have issues. If I have to branch both the front-end and the API, I cannot get TeamCity to do what I want. My question is this: how do I tell TeamCity to run a UA build using branch "A" from the Front-End CI build config and branch "B" from the API CI build config, where "A" and "B" can be any arbitrary branches? Currently, all branches from both snapshots are shown when I look at the UA build config (screenshot omitted).
If I run api-branch, it will always use the default branch from the Front-end CI snapshot. Same for any branch on the front-end snapshot. I cannot seem to find a way to specify this in the configuration or when starting a build.
I'm up for just about anything to address this, including build configs that are just cloned off of each other to specify branches the way they're meant to, but I'm just not seeing how I can do that either. Thanks!
Create a TeamCity template target which monitors both the front-end and API repositories and can trigger on changes. This should be one target (not two different targets). Parameterize the branch names so that the actual targets have to supply them.
I would suggest creating a mapping of the frontend:api branch pairs in a datastore (file, db, NoSQL). Then dynamically create TeamCity targets (through the REST API) for each new/modified combination and explicitly set the branch names. Once the targets are created, they will automatically run whenever there are changes.
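As a sketch of the REST part: TeamCity's REST API queues a build on an explicit branch via a POST to /app/rest/buildQueue with an XML body like the one composed below. The build configuration ID, branch name, server URL and credentials are all placeholders for this example.

```shell
# Compose the XML body for queuing a build on an explicit branch.
BUILD_TYPE="UserAcceptance"   # placeholder build configuration ID
BRANCH="feature-A"            # placeholder branch name
BODY=$(cat <<EOF
<build branchName="${BRANCH}">
  <buildType id="${BUILD_TYPE}"/>
</build>
EOF
)
echo "$BODY"
# To actually queue the build (server and credentials are placeholders):
# curl -u user:pass -H "Content-Type: application/xml" \
#      -d "$BODY" https://teamcity.example.com/app/rest/buildQueue
```

A small script driven by the frontend:api branch mapping can loop over the pairs and issue one such request per build configuration.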
I am developing a Magento site.
I have access to a local host and a remote host and would like to somehow configure development and production environments. On the remote host I restore the database data that was backed up on the local host, but when I do so, I overwrite the host's base name, and this causes the site to be redirected to a nonexistent URL when the page is loaded. How can I avoid this clash?
I want to be able to develop either (a) on http://remotehost/foobardev and back up my data to http://remotehost/foobar, or otherwise (b) develop on http://localhost/foobar and deploy to http://remotehost/foobar. I want to know how to transfer the database data back and forth without overwriting the values found in Magento Admin Panel -> System -> Configuration -> Web -> Unsecure Base URL / Secure Base URL when I run mysql and use the mysql source command to reinstate the database entries from the development site on the production site.
So, I would like an easier way to restore the database contents without overwriting the base URL configured in the Magento admin panel, as overwriting it causes a redirect to a nonexistent or wrong place on each page load and thus renders the system unusable.
Not exactly an SO type of question. Magento EE has staging built in and can merge your data as well. You have to understand that syncing data from dev to live is not easily possible without a serious sync framework that keeps track of the state of every row and column, knows which data is new and which is old, and resolves sync conflicts.
Here's your flow, based on the assumption that you are using CE and do not have data migration tools bundled:
set up the live database and accept that data will move only from live to dev, never from dev to live, as you don't have data migrations. Every config change you need to make and preserve at the database level, do it on the live database (test it in the dev environment first, then apply it on live)
make a shell script, Fabric script, or whatever deployment script you are comfortable with that exports a live db dump, deletes the dev database if it exists, creates a new one, imports the live dump into it, and runs a pre- or post-import SQL script that changes/deletes config values that are environment-dependent (like base_url, secure_base_url etc.)
to avoid double data entry, always create all attributes and config values that you need to preserve with Magento setup scripts.
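A sketch of the export/import script from the second step. Database names and the dev base URL are placeholders; the core_config_data paths are where Magento stores its base URLs. DRY_RUN=1 (the default here) only prints the commands, so nothing is touched until you flip it on a machine that actually has MySQL.

```shell
# Dry-run by default: print each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

LIVE_DB=magento_live                 # placeholder live database name
DEV_DB=magento_dev                   # placeholder dev database name
DEV_URL="http://localhost/foobar/"   # placeholder dev base URL

# 1. dump the live database
run mysqldump "$LIVE_DB" -r live.sql
# 2. recreate the dev database and import the dump
run mysql -e "DROP DATABASE IF EXISTS $DEV_DB; CREATE DATABASE $DEV_DB"
run sh -c "mysql $DEV_DB < live.sql"
# 3. post-import: overwrite environment-dependent config values
run mysql "$DEV_DB" -e "UPDATE core_config_data SET value = '$DEV_URL' WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url')"
```

Extend step 3 with any other environment-dependent config paths (payment gateways, API endpoints) you keep in core_config_data.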
The same goes for code. Here's a common setup scenario with live, stage and development environments:
one master version control repository (preferably bare, just to avoid someone changing files there) based on a clean Magento versions tree
separate branches for each environment (live, stage, dev(n)) and a verified code flow from dev (where you develop and the codebase may be in a broken state) to stage (where the release candidate resides, is ready for testing, and does not change), and from stage to live (where your live code is in a stable state)
every developer works on a checkout from the dev branch, commits to their own dev branch, and then pushes changes to dev, where they can be evaluated to decide whether they are mature enough for staging
stage is where the release candidate lives and the client (or automated tests) can test and diagnose whether it is ready to be released; no one ever changes code here, and code comes from the dev branch only
live is the live and running version, where no one ever changes any code directly. If the tests pass, code can come here from stage only
So to visualise it better, imagine your codebase residing in git:
myproject_magento_se (your project git repository on bitbucket.org, github, or wherever you can host it)
--> master (branch with all clean magento versions, from your current one to the latest)
--> dev (git checkout -b dev master, or branch from a specific version on master)
--> stage (while on dev: git checkout -b stage)
--> live (while on stage: git checkout -b live)
and imagine your hosts setup like this:
www.mylivesite.com = git clone yourgitrepo; git checkout live;
stage.mylivesite.com = git clone yourgitrepo; git checkout stage;
dev.mylivesite.com = git clone yourgitrepo; git checkout dev;
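The branch layout above as runnable commands. The repository location is a placeholder (a temp directory here; use your real remote instead), and the empty commit stands in for the clean Magento tree:

```shell
set -e
repo=$(mktemp -d)/myproject_magento_se
git init -q "$repo"
cd "$repo"
git symbolic-ref HEAD refs/heads/master   # make sure the first branch is "master"
git config user.email dev@example.com
git config user.name "Dev"
git commit -q --allow-empty -m "clean magento base"   # stands in for the magento tree
git checkout -q -b dev master
git checkout -q -b stage dev
git checkout -q -b live stage
git branch --list    # master, dev, stage, live
```

Each host from the list above then clones this repository and checks out its own branch.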
For all this, you'd better have deployment scripts that do the switching and the code and database lifting between environments at the push of a button.
Here are a few common actions that you need to perform regularly with every software project:
move/reset data from live to stage and from live to dev (with obfuscation calls if needed, to scramble or change client-related data)
move code from dev to stage
move code from stage to live
reset/create any dev environment from the live state (data and code)
Have fun :) and go through this thread as well: https://superuser.com/questions/90301/sync-two-mysql-databases plus anything else you can find searching SO for similar matters.