GitHub GraphQL: making the first commit after repo creation

Let's say I created a new github repository with
mutation AddRepository($name: String!) {
  createRepository(
    input: {
      name: $name
      visibility: PRIVATE
    }) {
    repository {
      name
      nameWithOwner
    }
  }
}
Now I want to make a commit and push to it. The createCommitOnBranch mutation requires an existing oid for the parent commit (expectedHeadOid) and an existing branch. Neither of these exists because the repo is brand new. How can I create my first commit using GraphQL?
I know the v3 GitHub API has an auto_init field available when creating a new repo, but I am hoping to do it all with GraphQL and not combine the GraphQL and REST APIs in my app.

Related

Gatsby GraphiQL didn't see images from Strapi 4

I have a problem with Strapi and Gatsby GraphiQL. I deployed Strapi on Heroku, created a content type, and tried to fetch some data, but Gatsby GraphiQL doesn't see the images. Why is this happening? Where is my mistake?
{
  resolve: "gatsby-source-strapi",
  options: {
    apiURL: "https://thawing-beyond-49749.herokuapp.com",
    collectionTypes: [
      "api/main-page-slides",
    ],
    queryLimit: 1000,
  },
},
gatsby-source-strapi is only compatible with Strapi v3, as you can see in the repository:
⚠️ This version of gatsby-source-strapi is only compatible with Strapi
v3 at the moment. We are currently working on a v4 compatible version.
That said, in the meantime, your only chance is to roll back to v3 and wait for a compatible plugin version, otherwise you won't be able to fetch the data properly.
In Strapi v4+ you can get image data with the following GraphQL query:
images {
  data {
    id
    attributes {
      url
    }
  }
}

GraphQL requesting fields that don't exist without error

I'm trying to handle backwards compatibility with my GraphQL API.
We have on-premise servers that get periodically updated based off of when they connect to the internet. We have a Mobile app that talks to the on-premise server.
Problem
We get into an issue where the Mobile app is up to date and the on-premise server isn't. When a change in the Schema occurs, it causes issues.
Example
Product version 1
type Product {
  name: String
}
Product version 2
type Product {
  name: String
  account: String
}
New version of mobile app asks for:
product(id: "12345") {
  name
  account
}
Because account is not valid in version 1, I get the error:
"Cannot query field \"account\" on type \"Product\"."
Does anyone know how I can avoid this issue so I don't receive this particular error? I'm totally fine with account coming back as null, or with some other plan of attack for updating schemas. But having it completely blow up with no response is not good.
Your question did not specify what you're actually using on the backend. But it should be possible to customize the validation rules a GraphQL service uses in any implementation based on the JavaScript reference implementation. Here's how you do it in GraphQL.js:
const { execute, parse, specifiedRules, validate } = require('graphql')

// Drop the rule that rejects unknown fields.
// (In newer graphql-js releases the function is named 'FieldsOnCorrectTypeRule'.)
const validationRules = specifiedRules.filter(rule => rule.name !== 'FieldsOnCorrectType')
const document = parse(someQuery)
// Validate with the reduced rule set, then execute as usual.
const errors = validate(schema, document, validationRules)
const data = await execute({ schema, document })
By omitting the FieldsOnCorrectType rule, you won't get any errors and unrecognized fields will simply be left off the response.
But you really shouldn't do that.
Modifying the validation rules will result in spec-breaking changes to your server, which can cause issues with client libraries and other tools you use.
This issue really boils down to your deployment process. You should not push new versions of the client that depend on a newer version of the server API until that version is deployed to the server. Period. That would hold true regardless of whether you're using GraphQL, REST, SOAP, etc.
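If the deployment process can't guarantee that ordering, one client-side mitigation (my own suggestion, not part of the answer above) is to probe the server's schema with a standard introspection query before sending version-dependent queries, and only include account when the server reports it:

```graphql
# Ask the server which fields its Product type actually exposes.
query ProductFields {
  __type(name: "Product") {
    fields {
      name
    }
  }
}
```

A version 1 server will list only name, so the client knows to omit account from subsequent queries; a version 2 server will list both.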

Update a complete JSON config file from spring cloud config

I am currently setting up a spring cloud config server to deliver some JSON config files from a specific repository, using RabbitMQ to propagate change events to the clients.
Therefore I added a REST endpoint, which delivers all config files, based on a passed branch name:
@RestController
public class RPConfigsEndpoint {

    @Autowired
    private JGitEnvironmentRepository repository;

    private File[] files;

    @RequestMapping(value = "/myConfigs")
    public File[] getList(@RequestParam(defaultValue = "master") String branch) {
        // load/refresh the branch
        repository.refresh(branch);
        try {
            FileRepositoryBuilder builder = new FileRepositoryBuilder();
            Repository repo = builder.setGitDir(repository.getBasedir()).readEnvironment().findGitDir().build();
            // only return JSON files
            files = repo.getDirectory().listFiles((file, s) -> {
                return s.toLowerCase().endsWith(".json");
            });
        } catch (IOException e) {
            e.printStackTrace();
        }
        return files;
    }
}
As expected, I get all the files as plain text files... so far so good.
Now my question:
If I modify one of these files and trigger the '/monitor' endpoint on the server, it generates a RefreshEvent as expected:
.c.s.e.MultipleJGitEnvironmentRepository : Fetched for remote master and found 1 updates
o.s.cloud.bus.event.RefreshListener : Received remote refresh request. Keys refreshed []
This event is sent, as the server notices that the commit ID has changed.
The delta of refreshed keys is empty, as it only looks for environment properties (in .yml or .properties files).
Is there a way to reload the whole config file on the client side as it would be done with single properties? Do I therefore need to adapt the change notification?
With my current approach, I would have to reload the whole branch (40 files) instead of reloading only the modified file...
I found a similar question from 2016 on GitHub, without finding the answer.
Thanks in advance for any hint.
Update
repository.findOne(...) and repository.refresh(...) are not thread safe, which must be guaranteed in a cloud environment, as the service can be contacted by different apps at the same time.
Possible solutions:
'synchronize' the concerned method call(s)
avoid listing all the files, and request single plain-text files instead, as that works out of the box
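The first option ('synchronize') can be sketched as below. SafeRefresher and unsafeRefresh are hypothetical stand-ins for the JGitEnvironmentRepository bean, meant only to show the locking pattern, not actual Spring Cloud Config API:

```java
public class SafeRefresher {
    private static final Object LOCK = new Object();
    private static int active = 0;          // threads currently inside the critical section
    private static volatile boolean overlapped = false;

    // Hypothetical stand-in for repository.refresh(branch), which is not thread safe.
    static void unsafeRefresh(String branch) throws InterruptedException {
        active++;
        if (active > 1) overlapped = true;  // two callers at once would corrupt state
        Thread.sleep(20);                   // simulate the git fetch
        active--;
    }

    // The fix: serialize every refresh call on a single shared lock.
    static void refresh(String branch) {
        synchronized (LOCK) {
            try {
                unsafeRefresh(branch);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // Returns true when no two refreshes overlapped.
    public static boolean runDemo() throws InterruptedException {
        Thread t1 = new Thread(() -> refresh("master"));
        Thread t2 = new Thread(() -> refresh("develop"));
        t1.start(); t2.start();
        t1.join(); t2.join();
        return !overlapped;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo() ? "refreshes were serialized" : "overlap detected");
    }
}
```

In the endpoint itself, the same effect comes from wrapping the repository.refresh(branch) call in a synchronized block on one shared lock object.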

Remote repository on Artifactory cannot download an artifact when it's not in the cache

I've created a VCS repository with the name yarn-test which is pointing to GitHub. The main goal is to use this as a remote repository for releases on GitHub.
The following URL allows us to download a release:
https://repo-url/artifactory/api/vcs/downloadRelease/yarn-test/yarnpkg/yarn/v0.23.4?ext=tar.gz
All fine. This release is downloaded and in our cache of the yarn-test registry. I can download the release from the cache using:
https://repo-url/artifactory/yarn-test/yarnpkg/yarn/releases/v0.23.4/yarn-v0.23.4.tar.gz
This seems to be good for us because we use a plugin which expects the URL of artifactory to be in a format of:
https://repo-url/artifactory/xx/xx/v0.23.4/yarn-v0.23.4.tar.gz
So when our release is in the cache of our repository it works fine. But when we upgrade the yarn release in our plugin configuration it's searching in the cache for a new version (for example v1.3.2).
It's searching for:
https://repo-url/artifactory/yarn-test/yarnpkg/yarn/releases/v1.3.2/yarn-v1.3.2.tar.gz
The URL format is good, but the v1.3.2 version is not in our cache which is normal. But here pops up our issue. We would expect it would 'translate' this to the layout of our real remote repository. But this seems not to work.
We just receive a 404 error.
Why is this not working? We can get a release from the cache, but when the release does not exist, our Artifactory repository is not able to download it from GitHub because the layout is different?
Changes to our layouts do not seem to have any impact (we really do delete and recreate the remote repo with new layouts).
We're using this example as inspiration:
For example, the remote repository http://download.java.net/maven/1
stores its artifacts according to the Maven 1 convention. You can
configure the cache of this repository to use the Maven 2 layout, but
set the Remote Layout Mapping to Maven 1. This way, the repository
cache handles Maven 2 requests and artifact storage, while outgoing
requests to the remote repository are translated to the Maven 1
convention.
source.
When accessing a VCS repository through Artifactory, the repository URL must be prefixed with api/vcs in the path.
For example, if you are using Artifactory standalone or as a local service, you would access your VCS repositories using the following URL:
http://localhost:8081/artifactory/api/vcs/<repository key>
Assuming your repository name is yarn-test, you should use the following request in order to get the v1.3.2 tag:
GET https://repo-url/artifactory/api/vcs/downloadTag/yarn-test/yarnpkg/yarn/v1.3.2
The documentation for the REST API for downloading a specific tag can be found here. For more info about VCS repositories please take a look at the documentation.
Repository Layouts in Artifactory are mostly used to parse/tokenize artifact paths in searches, or to derive artifact metadata from an artifact path, etc. The repository layout doesn't play any role here.
If you want to leverage the Artifactory VCS repo API (i.e. have it send proper requests to the GitHub REST API to retrieve a release archive, etc.), your request has to go through the API path of that repository (meaning /api/vcs/..) and not through the direct "static" download endpoint of the repo. Artifactory won't do any automatic translation of static download URIs to API URIs.
The way I see this, you have a few options:
Modify your plugin to conform to the proper structure of the URL, if possible.
Set up some sort of rewriting of the request URI in a reverse proxy fronting Artifactory, to modify "static download" URIs into proper API URIs before the request is relayed to Artifactory.
Write a User Plugin to intercept an event like the "afterDownloadError" event and implement some sort of "correct URL and re-send" logic there. This particular event allows you to set both the status code and InputStream of the response payload, so you could essentially override a 404 with a proper response. Here's an example of something similar that you can base your work on:
download {
    afterDownloadError { request ->
        log.warn("Intercepting afterDownloadError event...")
        def requestRepoKey = request.getRepoPath().getRepoKey()
        def requestRepoPath = request.getRepoPath().getPath()
        if (requestRepoKey == "my-repo" && requestRepoPath.endsWith(".tar.gz")) {
            // Do something with requestRepoPath to turn it into a proper request URI to the VCS repo
            // ...
            // Fetch
            def http = new HTTPBuilder(ARTIFACTORY_URL + requestRepoPath)
            log.warn("Directly sending a GET request to: " + ARTIFACTORY_URL + requestRepoPath)
            http.request(Method.GET, BINARY) { req ->
                response.success = { resp, binary ->
                    log.info "Got response: ${resp.statusLine}"
                    if (binary != null) {
                        // Set successful status
                        status = 200
                        // Set the context-bound inputStream to return the fetched content to the client
                        inputStream = binary
                        log.warn("Successfully intercepted an error on repo my-repo" +
                                " returning content from: " + ARTIFACTORY_URL + requestRepoPath)
                    } else {
                        log.warn("Received 200 response with null response content, returning 404")
                    }
                }
                response.failure = { resp ->
                    log.error "Request failed with the following status code: " + resp.statusLine.statusCode
                }
            }
        }
    }
}
Read more about User Plugins on our Wiki.
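The reverse-proxy option above could be sketched with an nginx rewrite; the regex, repository path, and upstream address here are illustrative assumptions, not a tested configuration:

```nginx
# Map a "static" download URI for an uncached release onto the VCS download API, e.g.
#   /artifactory/yarn-test/yarnpkg/yarn/releases/v1.3.2/yarn-v1.3.2.tar.gz
#   -> /artifactory/api/vcs/downloadTag/yarn-test/yarnpkg/yarn/v1.3.2
location ~ ^/artifactory/yarn-test/(?<org>[^/]+)/(?<repo>[^/]+)/releases/(?<tag>[^/]+)/.*\.tar\.gz$ {
    rewrite ^ /artifactory/api/vcs/downloadTag/yarn-test/$org/$repo/$tag break;
    proxy_pass http://localhost:8081;
}
```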
HTH,

How to make a new Perfect project from scratch (Swift server) in Xcode?

Perfect is a new Swift framework for creating a web/http server in Swift. The documentation is not there yet, and I'm having trouble building a new project from scratch. I don't know which frameworks need to be imported or what the entry point of the app is (main.swift, etc.).
I'd like to make a new xcworkspace that will have my project, "a hello world server".
Problems I'm trying to tackle:
Which frameworks must be included?
How should I create a Perfect server, and what's the entry point of the app?
How do I create a "hello" route which responds with a "Hello World" message?
How should I make the target for the server and eventually run the server?
I managed to write a "Hello World" guide about this. http://code-me-dirty.blogspot.co.uk/2016/02/creating-perfect-swift-server.html
In a nutshell you need to proceed like this:
Clone the original project
Create a new Workspace
Create a new Project
Import PerfectLib.xcodeproj and PerfectServer.xcodeproj, but do not copy
Set up your project scheme to launch the PerfectServer HTTP App
Link the PerfectLib in the "Linked Frameworks and Libraries" section
Set up the build settings for your framework target*
Create PerfectHandlers.swift and paste (better: write, to get the feeling) the following code
import PerfectLib

// Public method that is called by the server framework to initialize your module.
public func PerfectServerModuleInit() {
    // Install the built-in routing handler.
    // Using this system is optional and you could install your own system if desired.
    Routing.Handler.registerGlobally()
    // Create routes
    Routing.Routes["GET", ["/", "index.html"]] = { (_: WebResponse) in return IndexHandler() }
    // Check the console to see the logical structure of what was installed.
    print("\(Routing.Routes.description)")
}

// Create a handler for the index route
class IndexHandler: RequestHandler {
    func handleRequest(request: WebRequest, response: WebResponse) {
        response.appendBodyString("Hello World")
        response.requestCompletedCallback()
    }
}
Then you are ready to run. On my blog I have a longer, more detailed version of this, and I will update here if necessary.
Build Settings
Deployment Location: Yes
Installation Build Products Location : $(CONFIGURATION_BUILD_DIR)
Installation Directory : /PerfectLibraries
Skip Install : NO
I just wrote up a tutorial I want to share as another solution that outlines how to create a web service with Perfect and an app to interact with it.
http://chrismanahan.com/creating-a-web-service-swift-perfect
Summary
You must have your project in a workspace. This workspace should also include the PerfectServer and PerfectLib projects.
In your project, create a new OSX Framework target. This will be your server target
Link PerfectLib with both your server target and your app's target (if you're building an app alongside the server)
Edit your server's Run scheme to launch with PerfectServer HTTP App.
In your Server target's Build Settings, set the following flags:
Skip Install = No
Deployment Location = Yes
Installation Directory = /PerfectLibraries
Installation Build Products Location = $(CONFIGURATION_BUILD_DIR)
Create a new file in the server's folder. This file will handle requests that come in. Include [most of] the following:
import PerfectLib

// This function is required. The Perfect framework expects to find this function
// to do initialization.
public func PerfectServerModuleInit() {
    // Install the built-in routing handler.
    // This is required by Perfect to initialize everything.
    Routing.Handler.registerGlobally()

    // These two routes are specific to the tutorial in the link above.
    // This is where you would register your own endpoints.
    // Take a look at the docs for the Routes API to better understand
    // everything you can do here.

    // Register a route for getting posts
    Routing.Routes["GET", "/posts"] = { _ in
        return GetPostHandler()
    }

    // Register a route for creating a new post
    Routing.Routes["POST", "/posts"] = { _ in
        return PostHandler()
    }
}

class GetPostHandler: RequestHandler {
    func handleRequest(request: WebRequest, response: WebResponse) {
        response.appendBodyString("get posts")
        response.requestCompletedCallback()
    }
}

class PostHandler: RequestHandler {
    func handleRequest(request: WebRequest, response: WebResponse) {
        response.appendBodyString("creating post")
        response.requestCompletedCallback()
    }
}
As you're building out different aspects of your service, you can test it using cURL on the command line, or with other REST testing tools like Postman.
If you want to dive deeper and learn how to integrate with a SQLite database or create an app that talks to your new server, check out the tutorial at the top of this post.
I would recommend staying away from the templates, as others suggested, and creating a clean project yourself.
Create this folder structure:
MyAPI
├── Package.swift
└── Sources
└── main.swift
Then, in the Package.swift file
import PackageDescription

let package = Package(
    name: "MyAPI",
    targets: [],
    dependencies: [
        .Package(url: "https://github.com/PerfectlySoft/Perfect-HTTPServer.git", majorVersion: 2)
    ]
)
And the main.swift file:
import PerfectHTTP
import PerfectHTTPServer

do {
    let route = Route(method: .get, uri: "/hello", handler: { (request: HTTPRequest, response: HTTPResponse) in
        response.appendBody(string: "world!")
        response.completed()
    })
    try HTTPServer.launch(.server(name: "localhost", port: 8080, routes: [route]))
} catch {
    fatalError("\(error)")
}
Go to the command line and run:
swift package generate-xcodeproj
Open the generated project file:
MyAPI.xcodeproj
Change the active scheme, then build and run.
Open in Safari:
http://localhost:8080/hello
I'm not sure if you have found a solution or not, but this is what I did:
The 'Tap Tracker' app is an app written with the Perfect libraries, so even if the documentation isn't ready yet you can still dissect the app. I renamed the app and the classes/methods. There's a single index.html, which I moved into a 'www' root, and then I rerouted the view with TTHandler to try to make a standard layout. It doesn't work very well because the framework is so young, but it can be done. I would be much more specific, but I went back to Rails for the time being because I want to wait until it's a little more mature.
It's fun to mess around with, and I will probably write my own library on top of Perfect once feature innovation calms down and I can make something stable with it.
Just copy all files from one of the sample-projects to your git repository:
https://github.com/PerfectExamples
Rename example_project_name in all the files you've copied to your project name.
In terminal run
swift package generate-xcodeproj
And you'll get a Perfect Project with the required name.
