Generating a model referencing self column name error - phoenix-framework

Given (from scratch):
mix phoenix.new sandpit
mix ecto.create
mix phoenix.gen.json Part parts name part_of:references:parts
mix ecto.migrate
and editing the route so
scope "/api", Sandpit do
pipe_through :api
resources "/parts", PartController, except: [:new, :edit]
end
then running the server and going to
http://localhost:4000/api/parts
I get the following errors:
[info] Running Sandpit.Endpoint with Cowboy using http on port 4000
13 Jul 23:11:37 - info: compiled 5 files into 2 files, copied 3 in 744ms
[info] GET /api/parts
[debug] Processing by Sandpit.PartController.index/2
Parameters: %{}
Pipelines: [:api]
[debug] SELECT p0."id", p0."name", p0."part_of_id", p0."inserted_at", p0."updated_at" FROM "parts" AS p0 [] ERROR query=78.0ms
[info] Sent 500 in 140ms
[error] #PID<0.318.0> running Sandpit.Endpoint terminated
Server: localhost:4000 (http)
Request: GET /api/parts
** (exit) an exception was raised:
** (Postgrex.Error) ERROR (undefined_column): column p0.part_of_id does not exist
(ecto) lib/ecto/adapters/sql.ex:185: Ecto.Adapters.SQL.query!/5
Is this a bug with the generator? It seems to want columns named with an _id suffix; either that is a hidden "rule", or the generator failed to add the _id itself. If it doesn't automatically add _id, I'd expect it either to error during generation or to use the names as given without crashing.
Trying with
mix phoenix.gen.json Thing things name thing_of_id:references:things
and migrating causes no errors
or is something else going on?

This is a known issue; the best solution is currently under debate:
https://github.com/phoenixframework/phoenix/issues/1605
The current workaround is to remember to suffix reference column names with _id.
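Following that workaround, the generator call from the question would look something like this (a sketch; the column simply becomes part_of_id, and the earlier migration from the unsuffixed attempt would need to be rolled back or removed first):
mix phoenix.gen.json Part parts name part_of_id:references:parts
mix ecto.migrate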

Related

Global variable that counts the number of errors

I'm new to Ansible. I'm using playbooks/roles in OpenStack and I'm searching for a global variable (or similar) that counts the number of errors in the playbook, or at least records that an error occurred at some point.
Here is why (let's say this variable is called GLOB):
In the task file:
---
# tasks file
- name: testing block
  block:
    # List of tests to run in this test suite:
    - include: ../tests/DoThings.yml   # API calls, http 50x possible errors: YES
  always:                              # Final tasks, always executed
    - include: ../tests/Clear.yml      # API calls, http 50x possible errors: YES
    - include: ../tests/Report.yml     # NOT API calls, http 50x possible errors: NO, just a log checker.
So if some error occurred in DoThings.yml, I want to clean everything up with Clear.yml and AFTER that execute Report.yml; inside it I will check whether the global variable recorded at least one failure. This matters because a failure can be:
A "50x HTTP" error, which is impossible to predict and means you have to try again (this is the important case for me to detect and differentiate from other errors)
A normal error caused by a code mistake or similar, which is easy to fix (not important in this case)

Deploying on Netlify throws an error with my GraphQL/Gatsby/Contentful query, demands needless query parameter

At first I was getting this error on my local build server, but I managed to fix it there... the query is still the same, and Gatsby isn't throwing any errors for it locally. But every time I try to deploy on Netlify, it fails with the following message:
toFormat seems to be empty, we need a fileExtension to set it.
   1 | fragment GatsbyContentfulFluid_tracedSVG on ContentfulFluid {
>  2 |   tracedSVG
     |   ^
   3 |   aspectRatio
   4 |   src
   5 |   srcSet
   6 |   sizes
   7 | }
   8 |
   9 | query optbuildreposrccomponentsshopProductsJs2136335468 {
  10 |   products: allContentfulProduct {
  11 |     edges {
  12 |       node {
File path: /opt/build/repo/src/components/shop/Products.js
Plugin: none
failed during stage 'building site': Build script returned non-zero exit code: 1
Shutting down logging, 22 messages pending
This is the same error I was getting locally and I have no idea why it is occurring. There should be no reason that toFormat is a required parameter. This is using the standard gatsby-source-contentful plugin API request which has always served the image without issue in the past. If I change the request to 'fixed' instead of 'fluid' the problem goes away, but I need fluid images for this part of the site.
I emailed the Netlify staff a few days ago, but am yet to receive a reply. Any help would be greatly appreciated.
For those who are facing the same issue, I came up with a simple solution.
Remove the _tracedSVG suffix everywhere you used it in your files (a quick way to find those places is sketched below),
e.g.
GatsbyContentfulFixed_tracedSVG
to
GatsbyContentfulFixed
Stop your Gatsby server and run the following command:
gatsby clean && gatsby develop
Commit and push your changes (in case you are using GitHub).
On Netlify, find the option: Clear cache and deploy site.
It should fix your deployment on Netlify as well as the errors in your console :)
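To find every file that still uses the fragment, a search like this can help (a rough sketch, assuming your queries live under src/):
grep -rn "_tracedSVG" src/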
Two suggestions:
Local: Double check your content for any image references that do not append a suffix of .png or .jpg
Netlify: Clear cache and deploy site

SonarQube: The 'report' parameter is missing

I am using MSBuild. I have Java 8 installed.
I am running the following commands:
SonarQube.Scanner.MSBuild.exe begin /k:"ABC" /d:sonar.host.url="http://localhost:9000" /d:sonar.login="8b839xxxxxxxxxxxxxxxxxxxxxxx6b00125bf92" /d:sonar.verbose=true
"C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\msbuild.exe" /t:rebuild
SonarQube.Scanner.MSBuild.exe end /d:sonar.login="8b839xxxxxxxxxxxxxxxxxxxxxxx6b00125bf92"
The last step fails:
ERROR: Error during SonarQube Scanner execution
ERROR: The 'report' parameter is missing
ERROR:
ERROR: Re-run SonarQube Scanner using the -X switch to enable full debug logging.
The SonarQube Scanner did not complete successfully
12:53:21.909 Creating a summary markdown file...
12:53:21.918 Post-processing failed. Exit code: 1
The MSBuild version is greater than 14.
Java 8 is properly installed. Documentation indicates that Java 8 is adequate.
Any idea on what could be wrong?
Where do I add the -X switch? I tried it on all 3 statements.
Update: I installed Java SDK 9. Still the same issue.
Update: With verbose logging and using the /n naming parameter:
INFO: Analysis report generated in 992ms, dir size=4 MB
INFO: Analysis reports compressed in 549ms, zip size=1 MB
INFO: Analysis report generated in C:\ABC\.sonarqube\out\.sonar\scanner-report
DEBUG: Upload report
DEBUG: POST 400 http://localhost:9000/api/ce/submit?projectKey=ABC | time=1023ms
INFO: ------------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ------------------------------------------------------------------------
INFO: Total time: 54.833s
INFO: Final Memory: 51M/170M
INFO: ------------------------------------------------------------------------
DEBUG: Execution getVersion
DEBUG: Execution stop
ERROR: Error during SonarQube Scanner execution
ERROR: The 'report' parameter is missing
ERROR:
ERROR: Re-run SonarQube Scanner using the -X switch to enable full debug logging.
Process returned exit code 1
The SonarQube Scanner did not complete successfully
Creating a summary markdown file...
Post-processing failed. Exit code: 1
I've struggled with the same problem in SonarQube and I've finally found a solution:
you need to restart the Sonar service after using the evaluation token.
Please note this isn't the answer; however, I feel this feedback is valuable in getting this question answered.
I can reproduce this issue in Postman with a POST request to:
http://localhost:9000/api/ce/submit?projectKey=myProjectKey
This returns
{
  "errors": [
    {
      "msg": "The 'report' parameter is missing"
    }
  ]
}
You can get a similar error by removing the projectKey query parameter. I tried adding a report query parameter and received the same error:
http://localhost:9000/api/ce/submit?projectKey=brian3016&report=report
Given this, I feel there is a problem with their code. It should have included a report parameter when creating the POST request, but it failed to do so.
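For reference, the same reproduction works from the command line (a sketch using the placeholder project key from above; depending on your setup you may also need to pass a token with -u):
curl -X POST "http://localhost:9000/api/ce/submit?projectKey=myProjectKey"
# should return the same "The 'report' parameter is missing" error body as the Postman request above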
Verbose output seems to have changed from using the -X switch to /d:sonar.verbose=true, e.g.:
SonarScanner.MSBuild.exe begin /k:"myProjectKey" /d:sonar.host.url="http://localhost:9000" /d:sonar.login="myLogin" /d:sonar.verbose=true
Note the verbose logging didn't give me any valuable insight.
(Also note that the documentation currently says to use SonarQube.Scanner.MSBuild.exe, but the verbose logger told me to switch to SonarScanner.MSBuild.exe)
So... how do we report this issue to someone who can fix it? Their documentation says to go to Stack Overflow. So here we are.
I thought it might have been an issue with my project, so I created a new project with nothing other than the startup Console Application template. Same error.
In my case SonarQube 7.9.1 (deployed with Helm to a Kubernetes cluster) was missing the temp directory /opt/sonarqube/temp/tc/work/Tomcat/localhost/ROOT after a Helm rollback. No idea what happened to it.
The log file /opt/sonarqube/logs/web.log inside the SonarQube pod had this error:
2021.02.02 06:57:03 WARN web[AXdZ6l6MParQCncJACv3][o.s.s.w.ServletRequest] Can't read file part for parameter report
java.io.IOException: The temporary upload location [/opt/sonarqube/temp/tc/work/Tomcat/localhost/ROOT] is not valid
The fix was to exec into the pod and create the missing directory. I would like to know the reason, though...
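Recreating the directory looked roughly like this (a sketch; substitute your actual pod name and, if needed, namespace):
kubectl exec <sonarqube-pod-name> -- mkdir -p /opt/sonarqube/temp/tc/work/Tomcat/localhost/ROOT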
The issue is with the Sonar service starting up.
First try to stop SonarStart.bat with Ctrl+C, and then try to open localhost:9000 (or whichever port you configured for the Sonar server).
If it still opens, go to Task Manager, search for the wrapper.exe service, and stop it. If no such service is found, go to:
Task Manager > Details > and stop all java.exe processes.
Note: if you are running many Java applications, right-click java.exe, choose "Go to service(s)", and stop only the java.exe processes that belong to the AppX deployment services.
Now start SonarStart.bat as administrator.
Today I faced the same error when using Jenkins to scan the code.
After adding sonar.verbose=true I could see the POST to /api/ce/submit failing with a 400 code.
I used the steps below to find the reason:
First, I restarted SonarQube => failed
Then I checked the report file size with "du -sh": it was 108 MB, and the DB server supports up to 1 GB => failed
Finally I logged in to the SonarQube server and checked access.log, web.log and the other logs, and found the error reason: "Processing of multipart/form-data request failed. No space left on device". So I checked the server with "df -h", saw that some devices were 100% used, removed some unused files, and that fixed it!
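The checks from that last step boil down to something like this (a rough sketch; the log path is from a default Linux install and may differ on your server):
df -h                                      # look for file systems at 100% use
du -sh <scanner-report-dir>                # the directory printed by "Analysis report generated in ..." in the scanner output
grep -i "No space left" /opt/sonarqube/logs/web.log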
Check whether you have enough memory, e.g.:
free -m
In my case I had to upgrade the memory.

Tsung crashes with ** Reason for termination == ** {{badmatch,false}

I am using Tsung, compiled with OpenSSL and Erlang, for sending queries.
In the tsung_controller.log I am getting this error:
** Reason for termination ==
** {{badmatch,false},
[{ts_config_server,handle_call,3,
[{file,"src/tsung_controller/ts_config_server.erl"},
{line,305}]},
{gen_server,try_handle_call,4,[{file,"gen_server.erl"},{line,628}]},
{gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,660}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,238}]}]}
And:
=ERROR REPORT==== 17-Aug-2015::01:07:42 ===
** Generic server ts_config_server terminating
** Last message in was {get_client_config,static,"*****commented this **"}
** When Server state == {state,
{config,undefined,1200,5,none,text,undefined,
I have verified the setup using the information from the user manual (http://tsung.erlang-projects.org/user_manual/faq.html) and it reports OK, so the setup looks fine to me, but I am not able to figure out the exact reason for the crash.
On some forum I have seen the suggestion that the query file mentioned in the .xml file could be the reason, but I have tried both absolute and relative paths and unfortunately that does not seem to help.
According to this message from 10 years ago on their mailing list, and my experience today, you are probably using a fully qualified host name instead of a short hostname, e.g. "mymachine.domain.tld" instead of "mymachine", as the value for the host attribute in the client tag, as the documentation points out in item 10.1.
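To check which form your machine reports (a sketch; both commands are available on most Linux distributions):
hostname -s    # short name, e.g. mymachine -- the form the client tag's host attribute expects
hostname -f    # fully qualified name, e.g. mymachine.domain.tld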

AWS Responses from [i-2a7fe91f] were received, but the commands failed

I have a 64 bit Tomcat 7 server on AWS with the default settings. I use Elastic Beanstalk to manage my instances. Sometimes when I deploy a new version, it doesn't work and shows me an error:
Responses from [i-2a7fe91f] were received, but the commands failed.
The thing is, it happens about half of the time, not every time. When I get this error, I terminate the environment and create a new one with the same WAR file, and it works fine! However, I was wondering if anyone knows what is really happening.
Here's a part of log file that I think is relevant:
2013-05-23 17:12:02,555 [INFO] (20168 MainThread) [command.py-122] [root command execute] Executing command: Infra-WriteApplication2 - AWSEBAutoScalingGroup
2013-05-23 17:12:11,401 [INFO] (20168 MainThread) [command.py-130] [root command execute] Command returned: (code: 1, stdout: Error occurred during build:
, stderr: None)
2013-05-23 17:12:11,432 [DEBUG] (20168 MainThread) [commandWrapper.py-60] [root commandWrapper main] Command result: {'status': 'FAILURE', 'results': [{'status': 'FAILURE', 'config_set': u'Infra-WriteApplication2', 'returncode': 1, 'events': [], 'msg': 'Error occurred during build: \n'}], 'api_version': '1.0'}
I encountered the same error message. In my case, one of the commands in the .config file did not execute. There was no problem with the command itself; it turned out that I was missing a library, which prevented the command from executing fully. The problem was resolved when I made the necessary changes to the requirements file.
What helped in my case was looking through the log file and locating the culprit.
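On the instance itself, grepping the logs for the failure string from the deployment output is one way to locate that culprit (a sketch; the exact log file names vary by platform version):
sudo grep -ri "Error occurred during build" /var/log/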
My problem was that the war file could not be loaded due to the t1.micro RAM limit. However, the error was not descriptive at all.
