SendGrid/PHP/Heroku not working

I added the basic version of SendGrid to Heroku so we could send user-feedback emails from our website. The basic testing implementation I'm using is below:
<?php
/**** Takes posted content from 'contact.html' and sends us an email *****/
require 'sendgrid-php/SendGrid_loader.php';
$sendgrid = new SendGrid('username', 'pwd');
$mail = new SendGrid\Mail();
$mail->
addTo('matthewpolega@gmail.com')->
setFrom('matthewpolega@gmail.com')->
setSubject('another')->
setText('Hello World!')->
setHtml('<strong>Hello World!</strong>');
$sendgrid->
smtp->
send($mail);
header( 'Location: contact.html' );
?>
It works fine in localhost testing. However, it stalls when I test it online. Has anybody experienced a problem like this?

It sounds like you're having some issues with submodules on Heroku. There are two ways you can fix this:
1) Figure out what you did wrong by reading the Heroku submodule docs. It's probably as simple as git submodule add path/to/sendgrid (see the sketch below).
2) Remove the .git directory in the SendGrid module and check it in to your repo:
$ cd ../path/to/sendgrid_lib
$ rm -rf .git/
$ cd ../root/project/dir
$ git add path/to/sendgrid_lib
$ git commit -m "Removed SendGrid submodule and added to repo"

Related

Trait not found by Laravel base class using Composer 2 autoloader?

I'm on a Laravel project using new-ish versions of PHP, Laravel and Composer 2, as of this writing. I added a new app/Traits/MyTrait.php file beside several existing trait files but unfortunately Composer absolutely will not detect the new file. I'm getting this error:
Trait 'App\Traits\MyTrait' not found
Similar to:
Laravel Custom Trait Not Found
Here is the general layout of the code:
# app/Traits/MyTrait.php:
<?php
namespace App\Traits;
trait MyTrait {
// ...
}
# app/Notifications/MyBaseClass.php:
<?php
namespace App\Notifications;
use App\Traits\MyTrait;
class MyBaseClass
{
use MyTrait;
// ...
}
# app/Notifications/MyChildClass.php
<?php
namespace App\Notifications;
class MyChildClass extends MyBaseClass
{
// ...
}
The weird thing is that this code runs fine in my local dev, but no matter what I try, it won't work when deployed to the server while running in a Docker container. I've tried everything I can think of like saving "optimize-autoloader": true in composer.json and running composer dump-autoload -o during deployment, but nothing fixes it:
https://getcomposer.org/doc/articles/autoloader-optimization.md
I'm concerned that this inheritance permutation may not have been tested properly by Composer or Laravel, so this may be a bug in the tools. If worst comes to worst, I'll try these (potentially destructive) workarounds:
Calling composer dump-autoload -o (greatly slows deployment, as this is a large project, and so far doesn't seem to fix it anyway)
Deleting via rm vendor/composer/autoload_classmap.php, rm vendor/composer/autoload_psr4.php and/or rm vendor/composer/autoload_namespaces.php (or similar) in the vendor folder before each deployment to force Composer to rebuild.
Deleting via rm -rf vendor
The sinister part about this is that we must have full confidence in our deploy process. We can't hack this in our server dev environments by manually deleting things like vendor and then have it fail in the production deploy because Composer tripped over stale data in its vendor folder. My gut feeling is that this is exactly what's happening, perhaps due to the upgrade from Composer 1 to Composer 2, a version change, or stale cache files left over from work in recent months.
Even a verification like "this minimal sample project deployed to Docker works for us" would help to narrow this down, thanks.
Edit: this is a useful resource on how the Composer autoloader works: https://jinoantony.com/blog/how-composer-autoloads-php-files
The problem turned out to be caused by the container/filesystem on AWS being case-sensitive, but my local dev environment on macOS being case-insensitive.
My original trait (name withheld) ends with URL, but I was writing its path, and the use statement in the base class, with Url.
So this issue had nothing to do with traits, base classes or Composer. It also didn't require any modification of composer.json or the way we call it during deployment. But I think it's still best practice to keep this in composer.json; I currently use it this way in local dev too (good/bad?):
"config": {
"optimize-autoloader": true
},
The real problems here (industry-wide) are:
Vague error messages
Lack of effort by the tooling to drill down and find the actual cause (for example, by attempting a case-insensitive load and warning when a match is found)
Lack of action items for the user (have you checked the case? checked that the file exists? checked file permissions? etc etc, written into the error message itself, with perhaps a link to a support page/forum)
It wasn't convenient to ssh into the server (by design). So to troubleshoot, I temporarily committed this onto my branch:
# app/Http/Controllers/TestController.php
<?php
namespace App\Http\Controllers;
class TestController extends Controller
{
public function test()
{
return response('<pre>' .
'# /var/www/html/vendor/composer/autoload_classmap.php' . "\n" . file_get_contents('/var/www/html/vendor/composer/autoload_classmap.php') . "\n" .
'# /var/www/html/vendor/composer/autoload_files.php' . "\n" . file_get_contents('/var/www/html/vendor/composer/autoload_files.php') . "\n" .
'# /var/www/html/vendor/composer/autoload_namespaces.php' . "\n" . file_get_contents('/var/www/html/vendor/composer/autoload_namespaces.php') . "\n" .
'# /var/www/html/vendor/composer/autoload_psr4.php' . "\n" . file_get_contents('/var/www/html/vendor/composer/autoload_psr4.php') . "\n" .
'# /var/www/html/vendor/composer/autoload_real.php' . "\n" . file_get_contents('/var/www/html/vendor/composer/autoload_real.php') . "\n" .
'# /var/www/html/vendor/composer/autoload_static.php' . "\n" . file_get_contents('/var/www/html/vendor/composer/autoload_static.php') . "\n"
);
}
}
# routes/api.php
Route::get('/test', 'TestController@test');
Then deployed without merging in GitLab and compared the response to the error in AWS Cloudwatch, which is when the typo jumped out.
Then I removed the temporary commit with:
git reset --soft HEAD^
And force-pushed my branch with:
git push --force-with-lease
So I was able to solve this without affecting our CI/CD setup or committing code permanently to the develop or master branches.
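As an aside, when shell access is available the same comparison is a one-liner; a case-insensitive grep over Composer's generated classmap would surface a Url/URL mismatch immediately (paths assume a standard Laravel layout and the placeholder names used above):
grep -in 'mytrait' vendor/composer/autoload_classmap.php vendor/composer/autoload_static.php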
I've been doing this for a lot of years, and even suspected a case-sensitivity issue here, but sometimes we're just too close to the problem. If you're knee-deep in code and about to have an anxiety attack, it helps to have another set of eyes review your thought process with you from first principles.
I also need to figure out how to run my local Docker containers as case-sensitive as well, to match the server (since that's the whole point of using Docker containers in the first place).
I had the same problem and it was related to my file name. I had initially named it in lowercase, that is: apiResponser.php. Later I made some changes, renamed the file to ApiResponser.php and sent it to production, but ... uh oh, the same error appeared there.
The only way it worked for me was to do the rename through git:
📦 git mv app/Traits/apiResponser.php app/Traits/ApiResponser.php
This way I was able to solve it. I understand that you solved it in
another way, but this may help another developer. 🙂
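If git mv alone doesn't register a case-only rename on a case-insensitive filesystem, a two-step rename through a temporary name is a common fallback (a sketch; the Tmp suffix is arbitrary):
git mv app/Traits/apiResponser.php app/Traits/ApiResponserTmp.php
git mv app/Traits/ApiResponserTmp.php app/Traits/ApiResponser.php
git commit -m "Fix trait filename casing"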

Using a private Go module on GitLab as an import: "Unknown revision"

I cannot get this to work, even after checking other topics on stackoverflow. My project on gitlab.com/my_company/backend needs a module, found at gitlab.com/my_company/pkg/auth.
Locally, I've set up GOPRIVATE and git's configuration to make it work (and it works), but in GitLab CI pipelines on a merge request this fails.
Pipeline log / go.mod
I've added some debugging logs just to make sure everything was setup like I thought. Here's a failing pipeline's log :
$ git config --global url."ssh://git#gitlab.com/my_company/".insteadOf "https://gitlab.com/my_company/"
$ git config --global url."git#gitlab.com:".insteadOf "https://gitlab.com/"
$ git config -l | grep instead
url.ssh://git#gitlab.com/my_company/.insteadof=https://gitlab.com/my_company/
url.git#gitlab.com:.insteadof=https://gitlab.com/
$ env | grep GOPRIVATE
GOPRIVATE=gitlab.com/my_company
$ go mod download
go: gitlab.com/my_company/pkg/auth#v1.1.0: reading gitlab.com/my_company/pkg/auth/auth/go.mod at revision auth/v1.1.0: unknown revision auth/v1.1.0
One weird part of this log I've found is:
reading gitlab.com/my_company/pkg/auth/auth/go.mod - why is it repeating auth/auth? It actually happened once before locally, but it was because I wrote "github" instead of "gitlab" :)
The relevant go.mod line, just in case:
require (
gitlab.com/my_company/pkg/auth v1.1.0 // indirect
)
Repository tags
Here are the tags setup on the repository gitlab.com/my_company/pkg :
$ git tag -l
auth/v1.0.0
auth/v1.1.0
cache/v1.0.0
cache/v1.0.1
$ git ls-remote --tags
From git@gitlab.com:my_company/pkg.git
9efcb02d5489adaac9d525dcb496d868d65e856a refs/tags/auth/v1.0.0
13730d4f61df978c6d690fd2678e2ed924808e0c refs/tags/auth/v1.1.0
2b8dff0ec1b737d975290720933180a9b591a1db refs/tags/cache/v1.0.0
9a3e598bbf83bea57b29d8a908b514861ae37b12 refs/tags/cache/v1.0.1
I'm not that familiar with Gitlab CI so I'm out of things to try. Any ideas?
Thank you!
Update: I finally got gitlab-runner installed so I could try running the yml directly, no luck. It still works locally (not a big surprise).
Your project should contain a .gitlab-ci.yml file; you can add a GOPRIVATE variable to your CI configuration there and the runner will use it for your project.
More details how to add env vars:
https://docs.gitlab.com/ee/ci/variables/#create-a-custom-cicd-variable-in-the-gitlab-ciyml-file
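If the runner has no SSH key available, another common pattern (an assumption, not verified against this particular setup) is to export GOPRIVATE and rewrite Git URLs to use the CI job token over HTTPS in the job's before_script, for example:
export GOPRIVATE=gitlab.com/my_company
git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com/".insteadOf "https://gitlab.com/"
go mod download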

Error with JEST tests on TRAVIS-CI

Error
Error: /home/travis/build/ElectronicaGitHub/pictureAvenue/node_modules/jest-cli/node_modules/jsdom/node_modules/contextify/build/Release/contextify.node: invalid ELF header
That happens when I try to run the Jest tests. It's just the example test from the Jest tutorial and looks like:
jest.dontMock('../sum');
describe('sum', function() {
  it('adds 1 + 2 to equal 3', function() {
    var sum = require('../sum');
    expect(sum(1, 2)).toBe(3);
  });
});
Locally the test runs fine with Jest.
I tried running Mocha tests on Travis CI and they work fine!
But my project is in ReactJS and they advise using Jest for tests.
How do I fix this problem?
Found your project on GH: https://github.com/ElectronicaGitHub/pictureAvenue
Almost guaranteed that this is because you checked in the node_modules folder. This should be downloaded by each consumer of the project, using npm install. Add a .gitignore file to the root of your project with the following content:
node_modules
Before you do this, you'll need to do:
rm -fr node_modules && \
git commit -a -m 'remove node_modules from source control' && \
git push origin master
Then add your .gitignore file and do another commit.
While you're at it, it looks like you're also checking in your Sass cache (.sass-cache). You'll want to do the same thing there.
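The pattern is the same as for node_modules; to stop tracking an already-committed cache directory without deleting it locally (a sketch):
echo '.sass-cache/' >> .gitignore
git rm -r --cached .sass-cache
git commit -m 'stop tracking .sass-cache'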
Final thoughts:
Typically, source control is great for code and project artifacts; it is not great for things that are OS-dependent (like node_modules) or host-dependent (like .sass-cache).
Hope that helps.

Using forked package import in Go

Suppose you have a repository at github.com/someone/repo and you fork it to github.com/you/repo. You want to use your fork instead of the main repo, so you do a
go get github.com/you/repo
Now all the import paths in this repo will be "broken", meaning, if there are multiple packages in the repository that reference each other via absolute URLs, they will reference the source, not the fork.
Is there a better way than cloning it manually into the right path?
git clone git#github.com:you/repo.git $GOPATH/src/github.com/someone/repo
If you are using Go modules, you can use the replace directive.
The replace directive allows you to supply another import path that might
be another module located in VCS (GitHub or elsewhere), or on your
local filesystem with a relative or absolute file path. The new import
path from the replace directive is used without needing to update the
import paths in the actual source code.
So you could do the following in your go.mod file:
module some-project
go 1.12
require (
github.com/someone/repo v1.20.0
)
replace github.com/someone/repo => github.com/you/repo v3.2.1
where v3.2.1 is a tag on your repo. This can also be done through the CLI:
go mod edit -replace="github.com/someone/repo@v0.0.0=github.com/you/repo@v1.1.1"
To handle pull requests
fork a repository github.com/someone/repo to github.com/you/repo
download original code: go get github.com/someone/repo
be there: cd "$(go env GOPATH)/src"/github.com/someone/repo
enable uploading to your fork: git remote add myfork https://github.com/you/repo.git
upload your changes to your repo: git push myfork
http://blog.campoy.cat/2014/03/github-and-go-forking-pull-requests-and.html
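Put together, the steps above look roughly like this (a sketch; myfork is just an arbitrary remote name):
go get github.com/someone/repo
cd "$(go env GOPATH)/src/github.com/someone/repo"
git remote add myfork https://github.com/you/repo.git
# ...hack, commit...
git push myfork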
To use a package in your project
https://github.com/golang/go/wiki/PackageManagementTools
One way to solve it is the one suggested by Ivan Rave and http://blog.campoy.cat/2014/03/github-and-go-forking-pull-requests-and.html -- the way of forking.
Another is to work around the Go tooling's behavior. When you go get, Go lays out your directories under the same name as in the repository URI, and this is where the trouble begins.
If, instead, you issue your own git clone, you can clone your repository onto your filesystem on a path named after the original repository.
Assuming the original repository is at github.com/awesome-org/tool and you fork it to github.com/awesome-you/tool, you can:
cd $GOPATH
mkdir -p {src,bin,pkg}
mkdir -p src/github.com/awesome-org/
cd src/github.com/awesome-org/
git clone git@github.com:awesome-you/tool.git # OR: git clone https://github.com/awesome-you/tool.git
cd tool/
go get ./...
Go is perfectly happy to continue with this repository and doesn't actually care that the parent directory is named awesome-org while the git remote is awesome-you. All imports for awesome-org are resolved via the directory you have just created, which is your local working set.
In more length, please see my blog post: Forking Golang repositories on GitHub and managing the import path
edit: fixed directory path
If your fork is only temporary (i.e. you intend that it be merged) then just do your development in situ, e.g. in $GOPATH/src/launchpad.net/goamz.
You then use the features of the version control system (eg git remote) to make the upstream repository your repository rather than the original one.
It makes it harder for other people to use your repository with go get but much easier for it to be integrated upstream.
In fact I have a repository for goamz at lp:~nick-craig-wood/goamz/goamz which I develop for in exactly that way. Maybe the author will merge it one day!
Here's a way that works for everyone:
Use github to fork to "my/repo" (just an example):
go get github.com/my/repo
cd ~/go/src/github.com/my/repo
git branch enhancement
rm -rf .
go get github.com/golang/tools/cmd/gomvpkg/…
gomvpkg <<oldrepo>> ~/go/src/github.com/my/repo
git commit
Repeat each time you make the code better:
git commit
git checkout enhancement
git cherry-pick <<commit_id>>
git checkout master
Why? This lets you have your repo that any go get works with. It also lets you maintain & enhance a branch that's good for a pull request. It doesn't bloat git with "vendor", it preserves history, and build tools can make sense of it.
Instead of cloning to a specific location, you can clone wherever you want.
Then, you can run a command like this, to have Go refer to the local version:
go mod edit -replace github.com/owner/repo=../repo
https://golang.org/cmd/go#hdr-Module_maintenance
The answer to this is that if you fork a repo with multiple packages you will need to rename all the relevant import paths. This is largely a good thing since you've forked all of those packages and the import paths should reflect this.
Use vendoring and submodules together
Fork the lib on github (go-mssqldb in this case)
Add a submodule which clones your fork into your vendor folder but has the path of the upstream repo
Update your import statements in your source code to point to the vendor folder (not including the vendor/ prefix), e.g. vendor/bob/lib => import "bob/lib"
E.g.
cd ~/go/src/github.com/myproj
mygithubuser=timabell
upstreamgithubuser=denisenkom
librepo=go-mssqldb
git submodule add "git#github.com:$mygithubuser/$librepo" "vendor/$upstreamgithubuser/$librepo"
Why
This solves all the problems I've heard about and come across while trying to figure this out myself.
Internal package refs in the lib now work because the path is unchanged from upstream
A fresh checkout of your project works because the submodule system gets it from your fork at the right commit but in the upstream folder path
You don't have to manually hack the paths or mess with the go tooling.
More info
https://git-scm.com/book/en/v2/Git-Tools-Submodules
How do I fix the error message "use of an internal package not allowed" when go getting a golang package?
https://github.com/denisenkom/go-mssqldb/issues/406
https://github.com/golang/go/wiki/PackageManagementTools#go15vendorexperiment
The modern answer (Go 1.15 and higher, at least):
go mod init github.com/theirs/repo
Make the init argument explicit: the ORIGINAL module path. If you don't include the repo name, it will assume the one in GOPATH. But when you use Go modules, they no longer care where they are on disk, or where git actually pulls dependencies from.
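A minimal sketch of that flow (assuming the upstream repo predates modules and has no go.mod yet; the URLs are placeholders):
git clone https://github.com/yours/repo.git   # your fork
cd repo
go mod init github.com/theirs/repo            # keep the ORIGINAL module path
go build ./...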
To automate this process, I wrote a small script. You can find more details on my blog about adding a command like "gofork" to your bash.
function gofork() {
  if [ $# -ne 2 ] || [ -z "$1" ] || [ -z "$2" ]; then
    echo 'Usage: gofork yourFork originalModule'
    echo 'Example: gofork github.com/YourName/go-contrib github.com/heirko/go-contrib'
    return
  fi
  echo "Go get fork $1 and replace $2 in GOPATH: $GOPATH"
  # fetch both the fork and the original module into GOPATH
  go get $1
  go get $2
  currentDir=$PWD
  # remember each checkout's origin URL
  cd $GOPATH/src/$1
  remote1=$(git config --get remote.origin.url)
  cd $GOPATH/src/$2
  remote2=$(git config --get remote.origin.url)
  cd $currentDir
  # replace the original checkout with the fork, keeping the original import path
  rm -rf $GOPATH/src/$2
  mv $GOPATH/src/$1 $GOPATH/src/$2
  cd $GOPATH/src/$2
  # keep the upstream repository reachable as the "their" remote
  git remote add their $remote2
  echo Now in $GOPATH/src/$2 origin remote is $remote1
  echo And in $GOPATH/src/$2 their remote is $remote2
  cd $currentDir
}
export -f gofork
You can use the command go get -f to fetch a forked repo.
In your Gopkg.toml file, add the block below:
[[constraint]]
name = "github.com/globalsign/mgo"
branch = "master"
source = "github.com/myfork/project2"
So it will use the forked project2 in place of github.com/globalsign/mgo.
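After editing Gopkg.toml, you would typically let dep re-resolve and re-vendor the dependency (a sketch, assuming the dep tool is still in use here):
dep ensure -update github.com/globalsign/mgo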

Access current git commit number from within Heroku app

I know the slug compiler removes the .git directory when creating a heroku slug, but is there any way to configure Heroku so that I can access the currently running git commit number from within my scripts?
I'd like to be able to have a small link on my sinatra app (run within Heroku) which says "running version e72fb274a0" (or something similar). How can I retrieve this, or force the slug compiler to add it to an environment variable?
PROGRESS:
I reckon the best way to do this is to make a custom buildpack which writes the git commit version number to the heroku slug before the .git directory is deleted.
I've tried to do this (see my fork of the ruby buildpack) but the line I've added – line 23 – doesn't seem to be doing the job. Heroku sees & uses the new buildpack, but doesn't seem to write the file to the slug.
Anyone have any idea why my custom buildpack isn't working as expected?
Thanks,
JP
A couple of options...
SOURCE_VERSION environment variable (build-time)
Since 1st April 2015, there's a SOURCE_VERSION environment variable available to builds running on Heroku. For git-pushed builds, this is the git commit SHA-1 of the source being built:
https://devcenter.heroku.com/changelog-items/630
(thanks to @srtech for pointing that out!)
An example of me using that variable in a build - if you look at the HTML served by the deployed app, you'll see the commit id is coming through in an HTML comment near the very bottom: https://gu-who.herokuapp.com/
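Since SOURCE_VERSION is only set at build time, one approach (a sketch, not an official Heroku recipe) is to have a build step write it into a file that the running app can read later:
# e.g. in a custom build step or buildpack compile hook
echo "${SOURCE_VERSION:-unknown}" > REVISION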
/etc/heroku/dyno metadata file (run-time)
Heroku have beta functionality to write out a /etc/heroku/dyno metadata file onto your running dyno. If you email support you can probably get added to the beta. Here's a place where Heroku themselves are using it:
https://github.com/heroku/fix/blob/6c8ab7a/lib/heroku_dyno_metadata.rb
The contents look like this:
{
  "dyno":{
    "physical_id":"161bfad9-9e83-40b7-b385-78305db2f168",
    "size":1,
    "name":"run.7145"
  },
  "app":{
    "id":null
  },
  "release":{
    "id":50,
    "commit":"2c3a0b24069af49b3de35b8e8c26765c1dba9ff0",
    "description":null
  }
}
...so release.commit is the field you're after. I used to use this method until the SOURCE_VERSION variable became available.
In 2018 this is what you want:
https://devcenter.heroku.com/articles/dyno-metadata
heroku labs:enable runtime-dyno-metadata -a <app name>
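Once the labs feature is enabled and the app is redeployed, the commit should be exposed to the dyno as an environment variable; a quick way to check (assuming the HEROKU_SLUG_COMMIT variable this feature sets):
heroku run 'echo $HEROKU_SLUG_COMMIT' -a <app name>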
You can run a script before deploy that stores this information (maybe in a YAML file).
Using backticks, a = `ls` (note that it's the backtick character `, not the apostrophe '), the a variable will hold the output of that shell command, so you can do
git = `git log`
and then find the information you want and store it, so you will be able to retrieve it later.
Did this help?
