Hi @andimarek, a while ago you had a prototype PR about replacing ENF with a newer model that decoupled the variables (one of the reasons why ENF in its current form is not cacheable). In production we see some queries take 105ms @ P95 to build the ENF, so it has always been on our backlog/wishlist to contribute a change here. I noticed you have been actively working on Normalized Documents and was wondering whether this is a revisit of your original prototype to make the ENF cacheable. If so:
Cheers,
Hi @timward60 ... thanks a lot for your interest. Let me try to give some context and my thinking behind the normalized stuff, because this has never been properly documented so far:

Originally the ENF was introduced mainly for the Nadel project https://github.com/atlassian-labs/nadel because it allows for a much simpler and more efficient way of rewriting/transforming requests. But over time it proved to be useful in other cases too: for example, it is used inside the DataFetchingEnvironment for look-ahead, or to calculate query complexity in certain cases.

The main "problem" with ENF is that it requires variables as input (hence it is called ExecutableNormalized and not just normalized). If you want to make it independent of variables, you have to account for the different variations of skip/include, which could lead to vastly different queries. In the case of ENF it is very simple: you just evaluate skip/include and then either ignore the fields or include them.

This challenge around skip/include, and the fact that ENFs proved good enough (at least so far), meant I never got around to implementing a Normalized version until recently. This NormalizedDocument and the related classes indeed aim to be a fully working normalized document/operation/field that can be cached.

To note: I decided to solve the skip/include problem by providing NormalizedOperations for each possible combination of variables which are used in skip/include. See https://github.com/graphql-java/graphql-java/blob/master/src/main/java/graphql/normalized/nf/NormalizedDocument.java#L28. Otherwise a NormalizedField is very similar to an ENF, except the arguments can't be fully resolved, because they can reference variables.
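To make this concrete, here is a rough sketch of the idea (the query, field and variable names are made up purely for illustration):

```graphql
# A query whose effective shape depends on two boolean variables:
query Products($withReviews: Boolean!, $hideInternal: Boolean!) {
  products {
    name
    reviews @include(if: $withReviews) {
      rating
    }
    internalId @skip(if: $hideInternal)
  }
}
```

Because the document is normalized without knowing the variable values, the NormalizedDocument holds one NormalizedOperation per combination of the skip/include variables, roughly:

- `withReviews=true, hideInternal=false` → `products { name reviews { rating } internalId }`
- `withReviews=true, hideInternal=true` → `products { name reviews { rating } }`
- `withReviews=false, hideInternal=false` → `products { name internalId }`
- `withReviews=false, hideInternal=true` → `products { name }`

This is what keeps the normalized structure itself independent of variables and therefore cacheable.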
I am happy with that solution in general and believe that these NFs can be the normalized structure I talked about a while ago.

The three "challenges" I can foresee around NFs are: for example, what does this mean:

```graphql
{
  foo @cache(maxAge: 100)
  foo @cache(maxAge: 200)
}
```

There is no way of knowing how to merge these two fields together because the semantics are not clear. Or what about this:

```graphql
{
  ... @foo {
    ... @bar {
      hello
    }
  }
}
```

What does this mean?
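To see why this is tricky: normalization flattens inline fragments into a plain field structure, so (as a rough sketch, leaving type conditions aside) the example above would collapse to just:

```graphql
{
  hello
}
```

and it is not obvious whether `@foo`, `@bar`, both or neither should end up attached to the normalized `hello` field.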
I see two approaches here: simply retain the custom operation directives inside the NormalizedOperation as additional metadata, as much as possible.

Coming back to your question: I hope this answered your first question at least partially. Happy to share more details.

Yes please. Maybe you could share more details about what exactly you use ENFs for and what you are looking into.
As soon as we have some more real-life validation that it is useful, we can mark them public instead of experimental. Experimental is simply a way (in this case at least) to iterate faster, as we can break the contract as needed.