How to debug or run a GCP serverless application locally in Node.js? - debugging

According to this link, serverless-offline only supports AWS. I want to know whether there is a library similar to serverless-offline that supports GCP, or any other way of running the application locally. If there is a way, please provide a reference.

You have the Functions Framework, which allows you to spin up a local development server for quick Cloud Functions testing, but AFAIK there is nothing for emulating GCP's API Gateway locally.
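
For completeness, here is roughly what that looks like for a Node.js HTTP function; this is only a minimal sketch, and the function name helloWorld and the file layout are placeholders:

    // index.js - a minimal HTTP function served locally by the Functions Framework
    const functions = require('@google-cloud/functions-framework');

    // Register an HTTP handler; the name is what you pass to --target
    functions.http('helloWorld', (req, res) => {
      res.send(`Hello, ${req.query.name || 'World'}!`);
    });

After npm install @google-cloud/functions-framework, running npx functions-framework --target=helloWorld serves the function on http://localhost:8080, so you can iterate (and attach the Node.js inspector to that process for debugging) without deploying.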

Related

Local development, events and Ruby on Jets

I'm trying out Ruby on Jets since we'd like to reuse Rails code, but we have been deploying on AWS lately.
We'd like to have a smooth offline development experience such as the one provided by Serverless.
Has anyone managed to test events such as S3 or SNS in Ruby on Jets without deploying the lambda?
I'm not familiar with Ruby on Jets, but it seems to be using AWS. It might be possible to use LocalStack:
https://github.com/localstack/localstack
to simulate AWS services on your local machine.
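
As an illustration (written in JavaScript rather than Ruby only because the SDK language doesn't matter to LocalStack), pointing an AWS SDK client at LocalStack's default edge port is enough to exercise S3 locally; the bucket and object names here are made up:

    // localstack-s3.js - talks to a local LocalStack instance instead of real AWS
    const { S3Client, CreateBucketCommand, PutObjectCommand } = require('@aws-sdk/client-s3');

    // LocalStack listens on port 4566 by default; credentials can be any dummy values
    const s3 = new S3Client({
      region: 'us-east-1',
      endpoint: 'http://localhost:4566',
      forcePathStyle: true, // avoid DNS-style bucket addressing against localhost
      credentials: { accessKeyId: 'test', secretAccessKey: 'test' },
    });

    async function main() {
      await s3.send(new CreateBucketCommand({ Bucket: 'my-local-bucket' }));
      await s3.send(new PutObjectCommand({
        Bucket: 'my-local-bucket',
        Key: 'hello.txt',
        Body: 'uploading this can exercise any S3 event handling you wire up locally',
      }));
    }

    main().catch(console.error);

The same idea applies to SNS, SQS, Lambda and so on; in principle Jets would just need its AWS endpoints pointed at LocalStack.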

SAM CLI for CI/CD other than Cloud Formation

Is it possible to use SAM CLI (or any other tool known to mankind) to deploy a lambda function with defined triggers, memory and timeout limits set, etc., the way SAM CLI is able to do it using CloudFormation (or even in a better way)?
Currently I'm using Travis CI to deploy my lambda functions, but that's really just a better zip uploader to AWS, as I can't define any triggers for the lambda function the way I can through SAM (Serverless Application Model).
I would look into leveraging AWS CodePipeline, CodeBuild and CodeDeploy for your serverless functions' CI/CD. SAM also has some awesome baked-in tools for leveraging CodeDeploy under the hood to enable things like weighted rollouts, canary deploys, etc.
https://github.com/aws-samples/aws-safe-lambda-deployments
https://aws.amazon.com/blogs/compute/implementing-safe-aws-lambda-deployments-with-aws-codedeploy/
For specifying things like memory, triggers and timeouts, this would all be done in the CloudFormation template as you mentioned, and this is best practice.
Since asking the question I have come across different useful tools to deploy configured Lambda functions:
Serverless Framework
All-in-one development & monitoring of auto-scaling apps on AWS Lambda
AWS CDK
Define cloud infrastructure using familiar programming languages (see the sketch below)
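
As a sketch of the CDK route just mentioned (names like ProcessorFn and the src folder are placeholders), a few lines of JavaScript cover the memory, timeout and trigger configuration that would otherwise live in a CloudFormation/SAM template:

    // lib/my-stack.js - CDK v2 (aws-cdk-lib) stack defining a triggered Lambda
    const { Stack, Duration } = require('aws-cdk-lib');
    const lambda = require('aws-cdk-lib/aws-lambda');
    const s3 = require('aws-cdk-lib/aws-s3');
    const { S3EventSource } = require('aws-cdk-lib/aws-lambda-event-sources');

    class MyServerlessStack extends Stack {
      constructor(scope, id, props) {
        super(scope, id, props);

        const bucket = new s3.Bucket(this, 'UploadsBucket');

        const fn = new lambda.Function(this, 'ProcessorFn', {
          runtime: lambda.Runtime.NODEJS_18_X,
          handler: 'index.handler',
          code: lambda.Code.fromAsset('src'), // folder containing index.js
          memorySize: 256,                    // MB
          timeout: Duration.seconds(30),
        });

        // Invoke the function whenever an object is created in the bucket
        fn.addEventSource(new S3EventSource(bucket, {
          events: [s3.EventType.OBJECT_CREATED],
        }));
      }
    }

    module.exports = { MyServerlessStack };

Running cdk deploy synthesizes the CloudFormation for you, so this can slot into the same CodePipeline/CodeBuild flow mentioned earlier.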

How to set up Application Insights for on-premise Service Fabric?

Is it possible to add Application Insights for a web API that's hosted on the on-premise version of Service Fabric?
So far I have added Application Insights to my project and am wondering where to send the telemetry for monitoring. It was easy when the app was also in the cloud.
I believe there is no on-premise Application Insights service, so even if the web API is hosted on-premise over Service Fabric, one must use the cloud version of the Application Insights service. Is that correct? In that case, can anyone let me know how to set it up?
App Insights is only hosted in Azure. If you're looking for an on-premise solution, you're best off looking at something like the ELK stack (Elasticsearch, Logstash and Kibana).
Nonetheless, even though your cluster is hosted on-premise, using Azure App Insights is still very much a valid scenario (assuming your IT organisation is fine with it).
Assuming you're fine with Application Insights, I strongly recommend you have a look at App Insights Service Fabric. It works great for:
Sending error and exception info
Populating the application map with all your services and their dependencies (including databases)
Reporting on app performance metrics
Tracing service call dependencies end-to-end
Integrating with native as well as non-native SF applications
One thing, however, that the above won't solve is providing overall cluster health information - e.g. when/how often nodes go up/down, and how much CPU/memory and disk IO is consumed on individual nodes. For this you could try MS EventFlow or a custom Windows service.
There is no "on premise" application insights, but as long as your on premise service has access to send outbound data, you can use application insights on your site. You won't be able to use some features, like webtests, because application insights wont be able to make calls into your site.
Setup is the same as always, create an application insights resource in azure, and either configure it in visual studio, or manually set the instrumentation key in your applicationinsights.config (or via code) in your app.
If you need to configure outbound firewall rules or anything to let AI send data, that information is all here: https://learn.microsoft.com/en-us/azure/application-insights/app-insights-ip-addresses
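
For the "via code" route, this is roughly what it looks like; the snippet uses the Node.js Application Insights SDK purely for illustration (the .NET setup is the same idea, with the key in ApplicationInsights.config), and the environment variable name is just a common convention:

    // appinsights-setup.js - send telemetry from an on-premise service to Azure
    const appInsights = require('applicationinsights');

    appInsights
      .setup(process.env.APPINSIGHTS_INSTRUMENTATIONKEY) // key from the Azure portal resource
      .setAutoCollectRequests(true)
      .setAutoCollectExceptions(true)
      .setAutoCollectDependencies(true)
      .start();

    // Anything tracked here shows up in the Azure portal, even though the host is on-premise
    appInsights.defaultClient.trackEvent({ name: 'ServiceStarted' });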

What is Cloud Foundry? And how to integrate it with Amazon services

Hi, I am new to the concept of Cloud Foundry and why it is needed.
What is Cloud Foundry? Is it a free server? What types of services does it offer that would be beneficial for my Amazon cloud application?
How do I set it up on an EC2 instance?
Please comment from a developer's perspective, because we use GitHub as our source code version control system. I found that Cloud Foundry provides a load-balanced deployment feature directly. What about deploying multiple projects such as PHP, Java and mobile, and what about user management to restrict developers to only updating code rather than deploying?
Providers of Cloud Foundry (like Bluemix) give the developer the ability to push their application, and have the platform handle setting up the environment to run it. Each of your projects can be deployed as a new application (multiple languages are supported). You can deploy one or many instances of each application - a load balancer is built in to the platform. Unlike EC2 (Infrastructure as a Service), Cloud Foundry is a Platform as a Service. Users are not managing VMs, but instead are more focused on their application.
Applications can be deployed into organizations and spaces to manage collaborative development. There are a lot of videos on YouTube that demo this in action.
Cloud Foundry is an open source cloud platform as a service (PaaS) on which developers can build, deploy, run and scale applications.
Cloud platforms let anyone deploy network apps or services and make them available to the world in a few minutes. When an app becomes popular, the cloud scales it to handle more traffic, replacing build-out and migration efforts that once took months with a few keystrokes.

How do you run utility services on Heroku?

Heroku is fantastic for prototyping ideas and running simple web services; I often use it to run Python web services like Flask and Django and to try out ideas. However, I've always struggled to understand how you can use the infrastructure to run those amazingly powerful support or utility services every startup needs in its stack. Here are 4 examples of services I can't live without and would recommend to any startup:
Jenkins
Statsd
Graphite
Graylog
How would you run these on Heroku? Would it be best just getting dedicated boxes (Rackspace, etc.) with these support services installed?
Has anyone run utility daemons (services) on Heroku?
There are two basic options. The first is to find or create a Heroku addon to accomplish the task. For example, there are many hosted logging solutions you can use instead of Graylog; Rails on Fire or Travis can be used instead of Jenkins. If an appropriate addon doesn't exist, you can effectively make your own by just running the service on an AWS EC2 instance.
The other alternative is to push the service into being a 12factor application so that it can run on Heroku as well. For example, you could stub out whisper's filesystem calls so that they store in a backing service instead. This is often pretty painful and brittle, though, unless you can get your changes accepted by the upstream maintainers.
You could also use another free service in conjunction with it. OpenShift has a lot of Java-related build services and tools that can be added.
I am using a mix of Heroku, OpenShift, MongoLab and my own web hosting. Throw in Dropbox and Box for some space...
