My Heroku app uses the pandas module. I added pandas to requirements.txt and deployed the app; no errors were raised during the build, but the bot does not work once deployed.
To test this, I did the following:
I added a pandas DataFrame call that had no relevance into a sample command, and found that the ping did not print in the console:
import pandas
import requests

if message.content == 's!ping':
    # Checking latency; the DataFrame call is only here to test that pandas works when deployed
    df = pandas.read_html(requests.get('https://example.com').text)[2]
    ping = str(round(bot.latency, 3))
    print(ping)
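A way to narrow this down (a debugging sketch, assuming the snippet lives in a standard discord.py on_message handler, with message and bot coming from the surrounding code) is to wrap the pandas call in a try/except so the exact exception shows up in the Heroku logs:

try:
    # If pandas or its HTML parser is missing on Heroku, this is where it raises
    df = pandas.read_html(requests.get('https://example.com').text)[2]
except Exception as exc:
    print(f'pandas call failed: {exc!r}')  # visible via `heroku logs --tail`
else:
    ping = str(round(bot.latency, 3))
    print(ping)

If the printed exception turns out to be an ImportError, the missing package would have to be added to requirements.txt as well (pandas.read_html also needs an HTML parser such as lxml).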
One last thing to note:
When I run my code on my computer, the Discord bot works, but when I deploy it to Heroku it does not.
Thanks.
I am currently trying to host a TypeScript/Sequelize project using Google Cloud Build.
I am connecting through a Unix socket and the Cloud SQL Proxy.
The app is deployed, and a test running "sequelize.authenticate()" seems to be working.
Migrations against localhost seem to be working as well.
I have written a cloud build trigger that does the following:
- builds a simple Docker image
- pushes the simple Docker image
- runs npm install
- downloads the cloud_sql_proxy
- starts the cloud_sql_proxy
The next step would be to migrate a simple table to my Google Cloud database.
Please check out my drawing for further details: https://excalidraw.com/#json=LnvpSjngbk7h1F0RzBgUP,HPwtVWgh-sFgrmvfU9JK0A
If I try to run "npx sequelize-cli db:migrate", gcloud gives the following message: ERROR: connect ENOENT /cloudsql/xxxxxxx/.s.PGSQL.5432
But if I replace the command with npx sequelize-cli --version, it simply prints the version and moves on with the rest of the trigger steps.
I am using dbx to deploy and launch jobs on ephemeral clusters on Databricks.
I have initialized the cicd-sample-project and connected it to a fresh, empty Databricks free-trial environment, and everything works. This means that I can successfully deploy the Python package with this command:
python -m dbx deploy cicd-sample-project-sample-etl --assets-only
and execute it with:
python -m dbx launch cicd-sample-project-sample-etl --from-assets --trace
When I try to launch the exact same job in my company's Databricks environment, the deploy command goes through. The only difference is that my company's Databricks environment connects to Azure through a VPN.
Therefore, I added some rules to my firewall:
[screenshots of the firewall rules]
but when I run the dbx launch command I get the following error:
[error screenshot]
and this message appears in the log:
WARN MetastoreMonitor: Failed to connect to the metastore InternalMysqlMetastore(DbMetastoreConfig{host=consolidated-westeuropec2-prod-metastore-3.mysql.database.azure.com, port=3306, dbName=organization5367007285973203, user=[REDACTED]}). (timeSinceLastSuccess=0)
org.skife.jdbi.v2.exceptions.UnableToObtainConnectionException: java.sql.SQLTransientConnectionException: metastore-monitor - Connection is not available, request timed out after 15090ms.
    at org.skife.jdbi.v2.DBI.open(DBI.java:230)
I am not even trying to write to the metastore; I am just logging some data:
from cicd_sample_project.common import Task


class SampleETLTask(Task):
    def launch(self):
        self.logger.info("Launching sample ETL task")
        self.logger.info("Sample ETL task finished!")


def entrypoint():  # pragma: no cover
    task = SampleETLTask()
    task.launch()


if __name__ == "__main__":
    entrypoint()
Has anyone encountered the same problem? Were you able to use Databricks dbx with an Azure VPN?
Please let me know and thanks for your help.
PS: If needed, I can provide the full log.
In your case the egress traffic isn't configured correctly - it's not a dbx problem, but a general Databricks networking problem. Just make sure that outgoing traffic is allowed to the ports and destinations described in the documentation.
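As a quick way to verify this from the cluster side, a short check run in a notebook (a sketch that reuses the metastore host and port from the log above; swap in whatever endpoints the documentation lists for your region) will show whether the traffic actually gets out:

import socket

# Host and port taken from the metastore error above; replace with the
# endpoints from the Databricks networking documentation as needed.
host = "consolidated-westeuropec2-prod-metastore-3.mysql.database.azure.com"
port = 3306

try:
    # Fails with a timeout or refusal if egress to this destination is blocked
    with socket.create_connection((host, port), timeout=10):
        print(f"Egress to {host}:{port} is open")
except OSError as exc:
    print(f"Cannot reach {host}:{port}: {exc!r}")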
When I deploy a Solana program to devnet it works fine.
However, when I try to deploy the same program to production I get the following error:
Error: Deploying program failed: Error processing Instruction 1: custom program error: 0x1
There was a problem deploying: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "" }.
The command I am using is:
solana -k admin_key.json -u mainnet-beta program deploy target/deploy/pixels.so
This command works fine if I swap mainnet-beta with devnet.
It's worth noting that I can deploy to production (and I have) using:
solana -k admin_key.json -u mainnet-beta deploy target/deploy/pixels.so
Does anyone understand why there is a discrepancy between devnet and mainnet here?
Here's a link to the currently deployed program on mainnet:
https://explorer.solana.com/address/JBAnZXrD67jvzkWGgZPVP3C6XB7Nd7s1Bj7LXvLjrPQA
This was deployed using solana [...] deploy (versus the modern way of solana [...] program deploy).
You can see an example of a program deployed the modern way to devnet here:
https://explorer.solana.com/address/6uCCuJaQSQYGx4NwpDtZRyxyUvDMUJaVG1L6CmowgSTx?cluster=devnet
Error 0x1 typically means that there isn't enough SOL in the payer key to cover the deployment. You'll need to check that you have SOL on those keys on mainnet to properly do the deployment.
We are consistently getting an error when starting our Beam Golang SDK pipeline (driver program) from a Docker image, although the same program works when started locally or from a VM instance. We are using the Dataflow runner for our pipeline and Kubernetes to deploy.
LOCAL SETUP:
We have the GOOGLE_APPLICATION_CREDENTIALS variable set with the service account for our GCP cluster. When running the job locally, it gets submitted to Dataflow and completes successfully.
DOCKER SETUP:
The build image used is FROM golang:1.14-alpine. When we package the same program with a Dockerfile and try to run it, it fails with the error:
User program exited: fork/exec /bin/worker: no such file or directory
On checking Stackdriver logs for more details, we see this:
Error syncing pod 00014c7112b5049966a4242e323b7850 ("dataflow-go-job-1-1611314272307727-01220317-27at-harness-jv3l_default(00014c7112b5049966a4242e323b7850)"), skipping: failed to "StartContainer" for "sdk" with CrashLoopBackOff: "back-off 2m40s restarting failed container=sdk pod=dataflow-go-job-1-1611314272307727-01220317-27at-harness-jv3l_default(00014c7112b5049966a4242e323b7850)"
We found a reference to this error in the Dataflow common errors doc, but it is too generic to figure out what is failing. After multiple retries, we were able to rule out any permission or access related issues with the pods. We are not sure what else could be the problem here.
After multiple attempts, we decided to start the job manually from a new Debian 10 based VM instance, and it worked. This brought to our notice that we were using an Alpine-based golang image in Docker, which may not have all the dependencies required to start the job.
On the golang Docker Hub page we found golang:1.14-buster, where buster is the codename for Debian 10. Using that image for the Docker build solved the issue. Self-answering here to help anyone else facing the same problem.
I am trying to take notes while studying boto3, and I want to use Jupyter. The code below works in the interactive console, but it fails with
EndpointConnectionError: Could not connect to the endpoint URL:
"https://ec2.Central.amazonaws.com/"
when I try it in Jupyter. I suspect that this is because Jupyter cannot find the config and credentials files, but I am not sure; the message does not say exactly that.
import boto3

ec2 = boto3.resource('ec2')
response = ec2.create_vpc(
    CidrBlock='10.0.0.0/16',
)
print(response)
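The endpoint in the error suggests the region is resolving to the literal string "Central", so a quick look at what boto3 actually picks up inside the notebook (a diagnostic sketch, separate from the code above) can confirm where the configuration comes from:

import boto3

# Inspect the region and credentials boto3 resolves inside Jupyter;
# an invalid region such as "Central" would produce the endpoint error above.
session = boto3.session.Session()
print("region:", session.region_name)
print("profiles:", session.available_profiles)
creds = session.get_credentials()
print("credentials via:", creds.method if creds else None)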
You could always provide your region and credentials to the resource explicitly:
ec2 = boto3.resource(
    'ec2',
    region_name='REGION_NAME',
    aws_access_key_id='AWS_ACCESS_KEY_ID',
    aws_secret_access_key='AWS_SECRET_ACCESS_KEY'
)
To get this working I had to create a system environment variable that holds the path to the config file. The solution suggested by @scangetti is not secure.
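For example, the same effect can be had from inside the notebook by setting those variables before creating any boto3 objects (a sketch; the paths are placeholders for wherever the shared config and credentials files live):

import os

# AWS_CONFIG_FILE and AWS_SHARED_CREDENTIALS_FILE are read by botocore;
# the paths below are placeholders for your own files.
os.environ["AWS_CONFIG_FILE"] = "/home/me/.aws/config"
os.environ["AWS_SHARED_CREDENTIALS_FILE"] = "/home/me/.aws/credentials"

import boto3

ec2 = boto3.resource('ec2')  # now resolves region and credentials from those files
print(boto3.session.Session().region_name)

This keeps the access keys out of the notebook itself, unlike passing them directly to boto3.resource.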