Ramifications of changing PostGraphile "disableDefaultMutations" from true to false? - graphql

We've got a prod app (backend on AWS) that uses PostGraphile. Whoever originally set up PostGraphile set disableDefaultMutations = true. NB: we set all the PostGraphile parameters in TypeScript and deploy to AWS via CDK.
However, we now want the default mutations out of the box AND also want to keep the custom mutations we've already written.
Does anyone know if we can set "disableDefaultMutations: false" without breaking things or causing unintended consequences?
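For context, a minimal sketch (not our actual code) of the options object as it might look in the TypeScript setup; the connection string and schema name below are placeholders:

import { postgraphile } from "postgraphile";

// Only disableDefaultMutations is the point here. Custom mutations registered
// via appendPlugins should keep working unless a generated mutation name
// happens to collide with one of them.
export const middleware = postgraphile(process.env.DATABASE_URL, "public", {
  disableDefaultMutations: false, // was true; false re-enables the generated CRUD mutations
});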

Related

Query WildFly for a value and then use that in a CLI script

I have an Ansible script to update and maintain my WildFly installation. One of my tasks in this setup is managing the MySQL driver, and in order to update that driver I first have to disable the application that uses it before I can replace it and set up all my datasources anew.
My CLI script starts with the following lines:
if (outcome == success) of /deployment=my-app-1.1.ear:read-resource
deployment disable my-app-1.1.ear
end-if
My problem is that this makes me very dependent on the actual name of the application, and that name can change over time since my version information is part of it.
I tried the following:
set foo=`ls /deployment`
deployment disable $foo
It did not work: when I look at foo I see that it is not my-app-1.1.ear but ["my-app-1.1.ear"] -- so I feel that I am going in the right direction, even though I have not got it right yet.
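One direction that may help (a sketch, not verified on every WildFly version): recent jboss-cli releases support a for loop that iterates over list-valued results, which sidesteps the ["..."] string that set produces:

# Iterate over the deployment names instead of capturing them with `set`.
for deploymentName in :read-children-names(child-type=deployment)
    deployment disable $deploymentName
done

If more than one deployment is present, you would still need to filter on the name inside the loop.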

Define a Multi-Stage-Environment UI (Angular) in Kubernetes

A question regarding a multi-stage-environment in Kubernetes.
I have dev, test, and prod K8s clusters, and environment variables that differ from stage to stage (like backend URLs).
I was thinking of using an init container to replace the backend URLs per stage, so they're not hardcoded and can be changed if something changes.
Is this an anti-pattern, or would you just package the backends together with the frontend (which is not really possible for us, because we sometimes have more than one backend URL)?
You should use ConfigMaps to set the environment variables:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
Example for Angular:
Configmaps - Angular
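A minimal sketch of what that could look like; every name, key, and URL below is made up, and it assumes the Angular app loads a JSON config file at startup (browser code can't read pod environment variables directly):

apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-config            # one of these per stage (dev/test/prod)
data:
  config.json: |
    {
      "backendUrl": "https://api.dev.example.com",
      "reportingUrl": "https://reports.dev.example.com"
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: registry.example.com/frontend:latest   # placeholder image
          volumeMounts:
            - name: config
              mountPath: /usr/share/nginx/html/assets/config.json
              subPath: config.json
      volumes:
        - name: config
          configMap:
            name: frontend-config

The Deployment stays identical across clusters; only the ConfigMap differs per stage.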

Why would I suddenly get 'KerberosName$NoMatchingRule: No rules applied to user#REALM' errors?

We've been using Kerberos auth with several (older) Cloudera instances without a problem, but are now getting 'KerberosName$NoMatchingRule: No rules applied to user#REALM' errors. We've been modifying code to add functionality, but AFAIK nobody has touched either the authentication code or the cluster configuration.
(I can't rule it out - and clearly SOMETHING has changed.)
I've set up a simple unit test and verified this behavior. At the command line I can execute 'kinit -kt user.keytab user' and get the corresponding Kerberos tickets. That verifies the correct configuration and keytab file.
However my standalone app fails with the error mentioned.
UPDATE
As I edit this I've been running the test in the debugger so I can track down exactly where the test is failing, and it seems to succeed when run in the debugger!!! Obviously there's something different between the environments, not some weird heisenbug that is only triggered when nobody is looking.
I'll update this if I find the cause. Does anyone else have any ideas?
auth_to_local has to have at least one rule.
Make sure you have a "DEFAULT" rule at the very end of auth_to_local.
If none of the rules before it match, at least the DEFAULT rule will kick in.
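For a Cloudera/Hadoop cluster the mapping usually lives in hadoop.security.auth_to_local in core-site.xml. A hedged example of the shape (the RULE line is only illustrative; use your own realm and mappings):

<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](.*@EXAMPLE\.COM)s/@.*//
    DEFAULT
  </value>
</property>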

Possible to do a "dry run" validation of files?

Before creating an object in Kubernetes (Service, ReplicationController, etc.), I'd like to test that the JSON or YAML specification of the object is valid. But I don't want to actually create the object.
Is there some way to do a "dry run" that would be equivalent to running kubectl create --validate=true -f file.json, but would just let me know whether it passes validation, without actually creating the object?
Ideally, it would be great if I could do this via API, and not require the use of kubectl. But I could make it work if it required me to use kubectl.
Thanks.
This works for me (Kubernetes 1.7 and 1.9; note that older kubectl versions only accept a bare --dry-run flag, while newer ones spell it --dry-run=client):
kubectl apply --validate=true --dry-run=client --filename=file.yaml
Some kubectl commands support a --dry-run flag (like kubectl run, kubectl expose, and kubectl rolling-update).
There is an issue open to add the --dry-run flag to more commands.
There is a tool called kubeval which validates configs against the expected schema, and does not require connection to a cluster to operate, making it a good choice for applications such as CI.
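Basic usage is just pointing it at the file (flags may vary between kubeval versions):
kubeval file.yaml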
The use of --dry-run and --validate only seems to partially solve the issue:
client-side validation is not exhaustive. It primarily ensures the field names and types in the YAML file are valid. Full validation is always done by the server, and can always impose additional restrictions/constraints over client-side validation.
Source - kubectl --validate flag pass when yaml file is wrong #64830
Given this, you cannot do a full set of validations without handing the file off to the server for vetting.
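On the "via API, not kubectl" part of the question: API servers from Kubernetes 1.13 on accept a dryRun=All query parameter on create/update requests, which runs full server-side validation and admission without persisting the object. A rough sketch with placeholder host, token, and resource path:

curl -k \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST \
  --data @file.json \
  "https://<api-server>/api/v1/namespaces/default/services?dryRun=All"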

SCM management of AppFabric Cache Cluster

I'm working on building out a standard set of configurations for our cache clusters within AppFabric. My goal is to have a repeatable cache settings configuration when we load up a new environment (with different server names, numbers of hosts, and other environmental factors).
My initial pass was to utilize the XML available from Export-CacheClusterConfig and simply change server names and size attributes in the <hosts> section, but I'm not sure what else is automatically registered with those values (the hostId parameter, for example).
My next approach that I've considered is a PowerShell script to simply build up the various caches with the correct parameters passed in that would simply run as a post-deploy step.
Anyone else have experience with repeatable AppFabric cache cluster deployments?
After trying both, the more successful option seems to be a combination of two factors. Management of the cache cluster (host information) is primarily an operations concern and is best managed by the operations team (i.e. those guys that read Server Fault). Since this information is stored in the configuration as well (and would require an XML file obtained from Export-CacheClusterConfig for each environment), it's best left to the operations team to decide how they want to manage it. Importing the wrong file (with the incorrect host information) has led to a number of issues.
So, we're left with PowerShell scripts. Here's a sample that I have. It could be cleaned up (check for Cache existence first) but you get the general idea. It's also much easier to store in source control (as it's just one file).
New-Cache -CacheName CRMTickets -Eviction None -Expirable false -NotificationsEnabled true
New-Cache -CacheName ConsultantCache -Eviction Lru -Expirable true -TimeToLive 60
New-Cache -CacheName WorkitemCache -Eviction None -Expirable true -TimeToLive 60
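A hedged sketch of the "check for cache existence first" cleanup mentioned above, assuming the standard AppFabric Caching administration cmdlets and that Get-Cache output exposes a CacheName property:

Import-Module DistributedCacheAdministration
Use-CacheCluster

# Desired caches and their settings, kept in one place under source control.
$caches = @(
    @{ CacheName = 'CRMTickets';      Params = @{ Eviction = 'None'; Expirable = $false; NotificationsEnabled = $true } },
    @{ CacheName = 'ConsultantCache'; Params = @{ Eviction = 'Lru';  Expirable = $true;  TimeToLive = 60 } },
    @{ CacheName = 'WorkitemCache';   Params = @{ Eviction = 'None'; Expirable = $true;  TimeToLive = 60 } }
)

foreach ($cache in $caches) {
    # Only create the cache if the cluster doesn't already have it.
    $exists = Get-Cache | Where-Object { $_.CacheName -eq $cache.CacheName }
    if (-not $exists) {
        $params = $cache.Params
        New-Cache -CacheName $cache.CacheName @params
    }
}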
