I am running a command like the following.
serverless invoke local --function twilio_incoming_call
When running locally, I plan to detect this in my code and, instead of looking for POST variables, read from a MOCK file I'll be giving it.
However, I don't know how to detect whether I'm running serverless with this local command.
How do you do this?
I looked around on the Serverless website and found lots of info about running locally, but nothing about detecting whether you are running locally.
I found out the answer: process.env.IS_LOCAL will tell you if you are running locally. Missed this on their website somehow...
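A minimal sketch of how I'm using it (the mock file path and the form-body parsing are just examples for my Twilio handler):

const fs = require('fs');

// IS_LOCAL is set by `serverless invoke local`; it is absent when deployed to AWS.
const isLocal = !!process.env.IS_LOCAL;

module.exports.twilio_incoming_call = async (event) => {
  // Locally, read a mock payload from disk; on AWS, parse Twilio's form-encoded POST body.
  const params = isLocal
    ? JSON.parse(fs.readFileSync('./mocks/twilio_incoming_call.json', 'utf8'))
    : Object.fromEntries(new URLSearchParams(event.body));
  // ... handle the call using params
};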
If you're using AWS Lambda, it has some built-in environment variables. In the absence of those variables, you can conclude that your function is running locally.
https://docs.aws.amazon.com/lambda/latest/dg/lambda-environment-variables.html
const isRunningLocally = !process.env.AWS_EXECUTION_ENV
This method works regardless of the framework you use, whether that is Serverless, Apex Up, AWS SAM, etc.
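For illustration, a small sketch of how that check can be used (when deployed, AWS_EXECUTION_ENV holds the runtime identifier, e.g. AWS_Lambda_nodejs14.x):

// Undefined on your own machine, set by the Lambda runtime in the cloud.
const isRunningLocally = !process.env.AWS_EXECUTION_ENV;

if (isRunningLocally) {
  console.log('Running locally, using mock data');
} else {
  console.log('Running on Lambda runtime: ' + process.env.AWS_EXECUTION_ENV);
}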
You can also check what is in process.argv:
process.argv[1] will equal '/usr/local/bin/sls'
process.argv[2] will equal 'invoke'
process.argv[3] will equal 'local'
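So a (somewhat brittle) sketch of that check would be:

// Brittle: depends on where the sls binary lives and on the exact CLI arguments used.
const isServerlessLocalInvoke =
  typeof process.argv[1] === 'string' && process.argv[1].endsWith('sls') &&
  process.argv[2] === 'invoke' &&
  process.argv[3] === 'local';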
So I'm building an app based on Express and using Prisma ORM. What I need is to SSH into a server, open up an Express.js console and create a new DB entry using Prisma. Something similar to python manage.py shell for Django or rails console for Rails. Is there a solution of any kind for this?
As I pointed out in the comment, there is (kind of) a way to get access to a running Express instance. If that's all you need, follow:
How can I open a console to interact with Express app?
Express doesn't exactly have a feature like rails console, which in Rails' case is a framework feature.
That said, I question the long-term implications of this approach. If you really just need to seed some data, write an "init" script and call it after you SSH into the server, or via some CI/CD approach; see the sketch below. This is more reusable, since you can even pass a JSON file to the script to load dynamic data.
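A rough sketch of such an init script (file name, model, and JSON shape are all hypothetical):

// scripts/seed.js -- run it after you ssh in, e.g.: node scripts/seed.js data.json
const fs = require('fs');
const { PrismaClient } = require('@prisma/client');

const prisma = new PrismaClient();

async function main() {
  // Load the records to insert from the JSON file passed on the command line.
  const records = JSON.parse(fs.readFileSync(process.argv[2], 'utf8'));
  // "user" is a placeholder model name; replace it with one from your schema.prisma.
  await prisma.user.createMany({ data: records });
}

main()
  .catch((e) => { console.error(e); process.exit(1); })
  .finally(() => prisma.$disconnect());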
Also, Prisma has an official way to seed data (if that's what you need) that you can leverage:
https://www.prisma.io/docs/guides/database/seed-database
UPDATE:
If you are able to run the code on your machine and point it at the remote database, then you can use node --inspect to debug in a Chrome console, which should give you about the same effect as a Rails REPL:
https://medium.com/#tbernardes/debugging-nodejs-with-chrome-inspector-devtools-1cd2ef323b5e
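For example (the entry point file name is just a guess):

node --inspect-brk app.js

Then open chrome://inspect in Chrome and attach to the process; the DevTools console then works as a REPL against your running app.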
I have currently dockerized my dbt solution and I launch it in AWS Fargate (triggered from Airflow). However, Fargate requires about 1 minute to start running (image pull + resource provisioning, etc.), which is acceptable for long-running executions (hours), but not for short ones (1-5 minutes).
I'm trying to run my docker container in AWS Lambda instead of in AWS Fargate for short executions, but I encountered several problems during this migration.
The one I cannot fix is related to the message below, which appears when running dbt deps --profiles-dir . && dbt run -t my_target --profiles-dir . --select my_model:
Running with dbt=0.21.0
Encountered an error:
[Errno 38] Function not implemented
It says some function is not implemented, but I cannot see anywhere which function that is. As the error appears at the time of installing the dbt packages (redshift and dbt_utils), I tried to download them and include them in the Docker image (setting local paths in packages.yml), but nothing changed. Moreover, dbt writes no logs at this phase (I set the log-path to /tmp in dbt_project.yml so that it has write permissions within the Lambda), so I'm blind.
Digging into this problem, I've found that it can be related to multiprocessing issues within AWS Lambda (my Docker image contains Python scripts), as stated in https://github.com/dbt-labs/dbt-core/issues/2992. I run dbt from Python using the subprocess library.
Since it may be a multiprocessing issue, I have also tried to set "threads": 1 in profiles.yml but it did not solve the problem.
Has anyone succeeded in deploying dbt in AWS Lambda?
I've recently been trying to do this, and the summary of what I've found is that it seems to be possible, but isn't worth it.
You can pretty easily build a Lambda Layer that includes dbt and the provider you want to use, but you'll also need to patch the multiprocessing behavior and invoke dbt.main from within the Lambda code. Once you've jumped through all those hoops, you're left with a dbt instance that is limited to a relatively small upper bound on memory, a 15-minute maximum runtime, and is throttled to a single thread.
This discussion gives a rough example of what's needed to get it running in Lambda: https://github.com/dbt-labs/dbt-core/issues/2992#issuecomment-919288906
All that said, I'd love to put dbt on a Lambda and I hope dbt's multiprocessing will one day support it.
I'm extremely new to k6 + InfluxDB + Grafana, and I was given a task to execute certain k6 scripts locally but save/pass the data to a remote InfluxDB instance.
As of now I'm having issues: I'm not sure what configuration I'm missing, because every time I run the script pointing at the InfluxDB instance I get an error.
The command that I'm executing is:
k6 run --out influxdb="https://my_influxdb_url/write" //sampleScript.js
But the original URL that was handed over to me was something like this:
https://my_influxdb_url/write?db=DB_NAME&u=USERNAME&p=PASSWORD
And when I execute the command mentioned first, I get the following error:
ERRO[000X] Couldn't write stats error="404 page not found\n" output=InfluxDB1
So I've tried creating K6_INFLUXDB_USERNAME and K6_INFLUXDB_PASSWORD as environment variables but I'm still getting the same error.
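For reference, from what I've gathered in the k6 docs, the pieces are supposed to be combined roughly like this (DB_NAME, USERNAME and PASSWORD being placeholders, with the database name in the URL path rather than a /write endpoint), though I may be misreading them:

export K6_INFLUXDB_USERNAME=USERNAME
export K6_INFLUXDB_PASSWORD=PASSWORD
k6 run --out influxdb="https://my_influxdb_url/DB_NAME" sampleScript.js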
I'm not sure whether I'm missing some .yaml file (like a datasource) in which I should fill in those 3 values (DB_NAME, USERNAME, PASSWORD)?
Or maybe I'm just doing it all wrong and not calling the execution command properly for this scenario.
Another weird thing I noticed is that the output is reported as InfluxDB1 instead of my actual InfluxDB URL, which I guess might be where my issue lies.
Any kind of tip would be greatly appreciated, since the documentation I've found so far always either runs on a Docker container instance of Grafana + InfluxDB or simply runs everything locally, which is not my case :(
Thanks a lot in advance as always!!
I would like a variable to be shared among the various modules that I use for my cloud code.
For example, I was hoping I would be able to do the following:
In main.js, I would have the following:
Env = 'prod';
var Foo = require('cloud/foo.js').Foo;
Then in foo.js, I'd want to be able to access the value of Env
console.log("environment is: " + Env);
This does not work when deployed on Parse, but it does work if I run this in node.js.
Essentially, what I am looking for is a poor man's way to do dependency injection to allow me to easily test my cloud code in a local environment using node.js.
In the case above, Env would store the information that differs whether the cloud code executes in production (as a cloud function in Parse) or in a test (in node.js run locally).
[In the simple example above, I set Env to prod in main.js, and I'd set it to 'test' in my test script.]
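Concretely, the test script I run under node.js looks roughly like this (the relative require path is just for my local layout):

// test.js -- run locally with: node test.js
Env = 'test';                             // the shared value foo.js should pick up
var Foo = require('./cloud/foo.js').Foo;  // under node this logs "environment is: test"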
Thanks for any insight.
Every time I run the command below, it runs on the default database, not the database I've selected:
Config::set('database.connections.mysql.database', 'somedatabasename');
Artisan::call('migrate');
Anyone know why this is not working?
You could implement that by using different environments. For example, one config for the testing environment, another for local / staging / production. Could you elaborate on what you're actually trying to achieve and what the context is, so we can answer in more depth?