I am trying to connect Sync Gateway to Couchbase Server with the following config.json file:
{
"interface":":4984",
"adminInterface":":4985",
"log":["REST"],
"databases":{
"sync_gateway":{
"server":"http://localhost:8091",
"bucket":"sync_gateway",
"sync":`function(doc) {channel(doc.channels);}`,
"users": {
"GUEST": {
"disabled": false, "admin_channels": ["*"]
}
},
"shadow": {
"server": "http://localhost:8091",
"bucket": "copy"
}
}
}
}
but I am not able to do shadowing; it shows the following error:
2016-06-30T17:54:57.013+05:30 WARNING: Database "sync_gateway": unable to connect to external bucket for shadowing: 502 Unable to connect to shadow bucket: No bucket named copy -- rest.(*ServerContext)._getOrAddDatabaseFromConfig() at server_context.go:793
I have a service that I want to proxy with Consul Connect, and I followed the instructions on the HashiCorp Learn portal.
This is my "hello" service:
{
"service": {
"name": "node",
"port": 3000,
"connect": {
"sidecar_service": {}
}
}
}
I then do a "consul reload" and create the proxy with
consul connect proxy -sidecar-for node &
When I create another service like this
consul connect proxy -service web -upstream node:9191
I can verify that I can reach my node service by calling the web service on port 9191 (curl localhost:9191). But when I define my web service in a JSON file as shown below, register it (with consul reload), and try to connect to it, I get the following error:
curl: (7) Failed to connect to localhost port 9191: Connection refused
web.json
{
"service": {
"name": "web",
"connect": {
"sidecar_service": {
"proxy": {
"upstreams": [
{
"destination_name": "node",
"local_bind_port": 9191
}
]
}
}
}
}
}
Is there anything I missed?
I am following this tutorial: https://learn.hashicorp.com/consul/getting-started/connect
At the point when I ran consul connect proxy -sidecar-for web, it started throwing this error:
2020-07-26T14:30:18.243+0100 [ERROR] proxy.inbound: failed to dial: error="dial tcp 127.0.0.1:0: connect: can't assign requested address"
Why does this not have a port assigned in the demonstration?
{
"service": {
"name": "web",
"connect": {
"sidecar_service": {
"proxy": {
"upstreams": [
{
"destination_name": "socat",
"local_bind_port": 9191
}
]
}
}
}
}
}
The video in the tutorial shows the fourth line as:
"port": 8080,
The documentation is missing that line. Not that it matters much: nothing is listening on the web service, so the error will persist, and you can safely ignore it. I suspect your real issue is that the nc 127.0.0.1 9191 step is failing; I address that below.
The full config should look like:
{
"service": {
"name": "web",
"port": 8080,
"connect": {
"sidecar_service": {
"proxy": {
"upstreams": [
{
"destination_name": "socat",
"local_bind_port": 9191
}
]
}
}
}
}
}
But this isn't important for getting through this section of the lab. The instructions aren't clear, but don't forget to restart the web proxy (consul connect proxy -sidecar-for web) and start the socat proxy (consul connect proxy -sidecar-for socat).
That last part is sorely missing from the instructions and the video.
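As an aside: the inbound dial error itself only goes away once something is actually listening on the port declared for web. Here is a minimal, purely illustrative sketch of such a listener in plain Node.js (not part of the tutorial; it assumes the "port": 8080 declaration above):

const http = require('http');

// Toy stand-in for the "web" service: bind the port declared in web.json so
// the Connect sidecar's inbound proxy has something local to dial.
http.createServer((req, res) => {
  res.end('hello from web\n');
}).listen(8080, '127.0.0.1', () => {
  console.log('web service listening on 127.0.0.1:8080');
});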
I have a Next.js/Express/Apollo GraphQL app running fine on localhost.
I try to deploy it on Zeit Now; the Next.js part works fine, but the GraphQL backend fails because the /graphql route returns:
502: An error occurred with your deployment
Code: NO_STATUS_CODE_FROM_LAMBDA
My now.json looks like:
{
"version": 2,
"builds": [
{ "src": "next.config.js", "use": "#now/next" },
{ "src": "server/server.js", "use": "#now/node" }
],
"routes": [
{ "src": "/api/(.*)", "dest": "server/server.js" },
{ "src": "/graphql", "dest": "server/server.js" }
]
}
Suggestions?
Here’s a complete example of Next.js/Apollo GraphQL running both on Zeit Now (as serverless function/lambda) and Heroku (with an Express server):
https://github.com/tomsoderlund/nextjs-pwa-graphql-sql-boilerplate
I was getting that error until I found a solution on the Wes Bos Slack channel.
The following worked for me, but it's possible you could be getting that error for a different reason.
I'm not sure why it works.
You can see it working here
cd backend
Run npm install graphql-import
Update scripts in package.json:
"deploy": "prisma deploy --env-file variables.env&& npm run writeSchema",
"writeSchema": "node src/writeSchema.js"
Note: for non-Windows users, make sure to place a space before &&.
Create src/writeSchema.js:
const fs = require('fs');
const { importSchema } = require('graphql-import');

// Read the generated Prisma schema, resolving any "# import" directives,
// and write it out as a single flattened schema file.
const text = importSchema("src/generated/prisma.graphql");
fs.writeFileSync("src/schema_prep.graphql", text);
Update src/db.js:
const db = new Prisma({
typeDefs: __dirname + "/schema_prep.graphql",
...
});
Update src/createServer.js:
return new GraphQLServer({
typeDefs: __dirname + '/schema.graphql',
...
});
Update src/schema.graphql:
# import * from './schema_prep.graphql'
Create now.json
{
"version": 2,
"name": "Project Name",
"builds": [
{ "src": "src/index.js", "use": "#now/node-server" }
],
"routes": [
{ "src": "/.*", "dest": "src/index.js" }
],
"env": {
"SOME_VARIABLE": "xxx",
...
}
}
Run npm run deploy to initially create schema_prep.graphql.
Run now
Another reply said this:
You should not mix graphql imports and js/ts imports. The syntax in the graphql file will be interpreted by graphql-import and will be ignored by ncc (the compiler which reads the __dirname stuff and moves the files to the correct directory, etc.).
In my example 'schema_prep.graphql' is already preprocessed with the imports from the generated graphql file.
Hopefully this helps.
My cluster config file is as follows:
{
"name": "SampleCluster",
"clusterConfigurationVersion": "1.0.0",
"apiVersion": "01-2017",
"nodes":
[
{
"nodeName": "vm0",
"iPAddress": "here is my VPS ip",
"nodeTypeRef": "NodeType0",
"faultDomain": "fd:/dc1/r0",
"upgradeDomain": "UD0"
},
{
"nodeName": "vm1",
"iPAddress": "here is my another VPS ip",
"nodeTypeRef": "NodeType0",
"faultDomain": "fd:/dc1/r1",
"upgradeDomain": "UD1"
},
{
"nodeName": "vm2",
"iPAddress": "here is my another VPS ip",
"nodeTypeRef": "NodeType0",
"faultDomain": "fd:/dc1/r2",
"upgradeDomain": "UD2"
}
],
"properties": {
"reliabilityLevel": "Bronze",
"diagnosticsStore":
{
"metadata": "Please replace the diagnostics file share with an actual file share accessible from all cluster machines.",
"dataDeletionAgeInDays": "7",
"storeType": "FileShare",
"IsEncrypted": "false",
"connectionstring": "c:\\ProgramData\\SF\\DiagnosticsStore"
},
"nodeTypes": [
{
"name": "NodeType0",
"clientConnectionEndpointPort": "19000",
"clusterConnectionEndpointPort": "19001",
"leaseDriverEndpointPort": "19002",
"serviceConnectionEndpointPort": "19003",
"httpGatewayEndpointPort": "19080",
"reverseProxyEndpointPort": "19081",
"applicationPorts": {
"startPort": "20001",
"endPort": "20031"
},
"isPrimary": true
}
],
"fabricSettings": [
{
"name": "Setup",
"parameters": [
{
"name": "FabricDataRoot",
"value": "C:\\ProgramData\\SF"
},
{
"name": "FabricLogRoot",
"value": "C:\\ProgramData\\SF\\Log"
}
]
}
]
}
}
It is almost identical to the demo file for an unsecured cluster that ships with the standalone Service Fabric download, except for my VPS IPs. I enabled the Remote Registry service. I ran .\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json but I got the following error:
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
   at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <IP Address> resulted in exception: Unable to change open service manager handle because 5.
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
   at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <Another IP Address> resulted in exception: Unable to change open service manager handle because 5.
Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces
LocalAdminPrivilege : True
IsJsonValid : True
IsCabValid :
RequiredPortsOpen : True
RemoteRegistryAvailable : False
FirewallAvailable :
RpcCheckPassed :
NoConflictingInstallations :
FabricInstallable :
DataDrivesAvailable :
Passed : False
Test Config failed with exception: System.InvalidOperationException: Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces folder.
   at System.Management.Automation.MshCommandRuntime.ThrowTerminatingError(ErrorRecord errorRecord)
I don't understand the problem. The VPSs are not locally connected; they all have public IPs. I don't know whether that may be the issue. How do I make a virtual LAN among these VPSs? Can anyone give me some direction about this error? Any help is greatly appreciated.
Edit: I used the term VM instead of VPS.
Finally I got this working. Actually, all the nodes are on a common network; I had thought they weren't. I enabled file sharing, then tried to access the shared files on each of the other nodes from the node where I ran the configuration test, supplying the login credentials. After that, it worked like a charm.
I have added a Redis ElastiCache section to my s-resource-cf.json (a CloudFormation template), and selected its hostname as an output.
"Resources": {
...snip...
"Redis": {
"Type": "AWS::ElastiCache::CacheCluster",
"Properties": {
"AutoMinorVersionUpgrade": "true",
"AZMode": "single-az",
"CacheNodeType": "cache.t2.micro",
"Engine": "redis",
"EngineVersion": "2.8.24",
"NumCacheNodes": "1",
"PreferredAvailabilityZone": "eu-west-1a",
"PreferredMaintenanceWindow": "tue:00:30-tue:01:30",
"CacheSubnetGroupName": {
"Ref": "cachesubnetdefault"
},
"VpcSecurityGroupIds": [
{
"Fn::GetAtt": [
"sgdefault",
"GroupId"
]
}
]
}
}
},
"Outputs": {
"IamRoleArnLambda": {
"Description": "ARN of the lambda IAM role",
"Value": {
"Fn::GetAtt": [
"IamRoleLambda",
"Arn"
]
}
},
"RedisEndpointAddress": {
"Description": "Redis server host",
"Value": {
"Fn::GetAtt": [
"Redis",
"Address"
]
}
}
}
I can get CloudFormation to output the Redis server host when running sls resources deploy, but how can I access that output from within a Lambda function?
There is nothing in this starter project that refers to that IamRoleArnLambda output; it just came with the example project. According to the docs, templates are only usable for project configuration; they are not accessible from Lambda functions:
Templates & Variables are for Configuration Only
Templates and variables are used for configuration of the project only. This information is not usable in your lambda functions. To set variables which can be used by your lambda functions, use environment variables.
So, then how do I set an environment variable to the hostname of the ElastiCache server after it has been created?
You can set environment variables in the environment section of a function's s-function.json file. Furthermore, if you want to prevent those variables from being put into version control (for example, if your code will be posted to a public GitHub repo), you can put them in the appropriate files in your _meta/variables directory and then reference those from your s-function.json files. Just make sure you add a _meta line to your .gitignore file.
For example, in my latest project I needed to connect to a Redis Cloud server, but didn't want to commit the connection details to version control. I put variables into my _meta/variables/s-variables-[stage]-[region].json file, like so:
{
"redisUrl": "...",
"redisPort": "...",
"redisPass": "..."
}
…and referenced the connection settings variables in that function's s-function.json file:
"environment": {
"REDIS_URL": "${redisUrl}",
"REDIS_PORT": "${redisPort}",
"REDIS_PASS": "${redisPass}"
}
I then put this redis.js file in my functions/lib directory:
module.exports = () => {
const redis = require('redis')
const jsonify = require('redis-jsonify')
const redisOptions = {
host: process.env.REDIS_URL,
port: process.env.REDIS_PORT,
password: process.env.REDIS_PASS
}
return jsonify(redis.createClient(redisOptions))
}
Then, in any function that needed to connect to that Redis database, I imported redis.js:
redis = require('../lib/redis')()
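For example, a handler could then use the shared client like this (a minimal sketch of the idea; the greeting key is just a hypothetical illustration):

const redis = require('../lib/redis')()

// Store and read back a JSON value; redis-jsonify handles the (de)serialization.
module.exports.handler = function (event, context) {
  redis.set('greeting', { hello: 'world' }, function (err) {
    if (err) return context.done(err)
    redis.get('greeting', function (err, value) {
      context.done(err, value)
    })
  })
}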
(For more details on my Serverless/Redis setup and some of the challenges I faced in getting it to work, see this question I posted yesterday.)
update
CloudFormation usage has been streamlined somewhat since that comment was posted in the issue tracker. I have submitted a documentation update to http://docs.serverless.com/docs/templates-variables, and posted a shortened version of my configuration in a gist.
It is possible to refer to a CloudFormation output in an s-function.json Lambda configuration file, in order to make those outputs available as environment variables.
s-resource-cf.json output section:
"Outputs": {
"redisHost": {
"Description": "Redis host URI",
"Value": {
"Fn::GetAtt": [
"RedisCluster",
"RedisEndpoint.Address"
]
}
}
}
s-function.json environment section:
"environment": {
"REDIS_HOST": "${redisHost}"
},
Usage in a Lambda function:
exports.handler = function(event, context) {
console.log("Redis host: ", process.env.REDIS_HOST);
};
old answer
Looks like a solution was found / implemented in the Serverless issue tracker (link). To quote HyperBrain:
CF Output variables
To have your lambda access the CF output variables you have to give it the cloudformation:describeStacks access rights in the lambda IAM role.
The CF.loadVars() promise will add all CF output variables to the process'
environment as SERVERLESS_CF_OutVar name. It will add a few ms to the
startup time of your lambda.
Change your lambda handler as follows:
// Require Serverless ENV vars
var ServerlessHelpers = require('serverless-helpers-js');
ServerlessHelpers.loadEnv();
// Require Logic
var lib = require('../lib');
// Lambda Handler
module.exports.handler = function(event, context) {
ServerlessHelpers.CF.loadVars()
.then(function() {
lib.respond(event, function(error, response) {
return context.done(error, response);
});
})
};
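To make that concrete, here is a minimal sketch (my own, not from the issue tracker) of reading the redisHost output from the earlier s-resource-cf.json once CF.loadVars() has resolved; the exact environment variable name is an assumption based on the SERVERLESS_CF_ naming described above:

// Require Serverless ENV vars
var ServerlessHelpers = require('serverless-helpers-js');
ServerlessHelpers.loadEnv();

// Lambda Handler
module.exports.handler = function(event, context) {
  ServerlessHelpers.CF.loadVars()
    .then(function() {
      // Assumed name, following the SERVERLESS_CF_<OutputName> pattern quoted above
      var redisHost = process.env.SERVERLESS_CF_redisHost;
      console.log('Redis host from CloudFormation output:', redisHost);
      return context.done(null, { redisHost: redisHost });
    });
};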