task proxies and the task pool in the workflow service
What ghost nodes are, in the graph view: tasks at a given cycle point that currently have no associated task proxy at the back end.
The tree and dot views do not currently show “ghost tasks”, but they should. Users should not be expected to understand the difference between a waiting task at cycle point N and one that does not exist yet (because its task proxy has not yet spawned beyond cycle point N-1).
In cylc-7 only the graph view uses the graph data: it colours the nodes that have corresponding task status data, and any remaining un-coloured nodes are ghost nodes. The other views just take a flat list of tasks that reflects the task proxy pool at the back end (so ghosts are missing).
Cylc can have tasks with different cycling intervals in the same workflow, so we can’t just assume that every task appears at every cycle point. It is the graph that determines which tasks are possible at each cycle point.
The graph is presented as a flat list of edges, i.e. pairs of connected nodes. Graph libraries construct the graph from the edges.
To have ghost tasks in all views, either all views will need the graph data (edges) client-side, or the graph data will need to be consulted server-side to add ghosts to the task status data (probably the latter? — sketched below).
So: ghost tasks should be pretty easy in the new UI.
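For illustration, a minimal sketch of the server-side option, with made-up task names and data structures (not the actual Cylc internals):

```python
# A minimal sketch of server-side ghost filling (illustrative names and
# data only). The graph arrives as a flat list of edges; each node is a
# (task_name, cycle_point) pair. Note the "daily" task has a longer
# cycling interval, so it is absent from most cycle points - the graph,
# not the task list, says where it can appear.
edges = [
    (("foo", "20190101T00"), ("bar", "20190101T00")),
    (("foo", "20190101T06"), ("bar", "20190101T06")),
    (("daily", "20190101T00"), ("foo", "20190101T00")),
]

# The task proxy pool: only tasks that currently exist at the back end.
task_pool = {("foo", "20190101T00"), ("bar", "20190101T00")}

# Every node the graph says is possible, across all cycle points.
graph_nodes = {node for edge in edges for node in edge}

# Ghosts: graph nodes with no associated task proxy.
ghosts = graph_nodes - task_pool
print(sorted(ghosts))
# [('bar', '20190101T06'), ('daily', '20190101T00'), ('foo', '20190101T06')]
```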
DS also commented, on GraphQL:
queries can return a flat list of task data, which could be used to construct the tree view client-side, OR a nested family tree structure that might be able to feed the tree view directly (the family tree depth has to be known in this case, to construct the query — see the sketch after these notes).
it is possible to get from graph edges to task data, so if using the graph client-side we would not necessarily need to retrieve the graph and the task proxies separately (also sketched below).
we need to stop using “TaskProxy” in the GraphQL schema, because it doesn’t have the same meaning there as it does in the server program.
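Two hedged sketches of the points above, assuming illustrative field names (families, tasks, edges, cyclePoint are guesses, not the real schema): a helper that writes out the nested family-tree query to a known depth, and a query that resolves task data directly off the graph edges:

```python
def family_tree_query(depth: int) -> str:
    """Build a nested family-tree query down to a known depth.

    GraphQL cannot express unbounded recursion, so the family nesting
    must be written out explicitly - hence the tree depth has to be
    known before the query can be constructed.
    """
    inner = "tasks { name state }"
    for _ in range(depth):
        inner = f"tasks {{ name state }} families {{ name {inner} }}"
    return f"{{ families {{ name {inner} }} }}"


# Resolving task data directly from the graph edges, so the client would
# not need to retrieve the graph and the task proxies separately.
EDGES_WITH_TASK_DATA = """
{
  edges {
    source { name cyclePoint state }
    target { name cyclePoint state }
  }
}
"""

print(family_tree_query(2))
```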
Can we get DS’s PR merged as soon as possible, to use as a basis for further development? OS’s problems with it may not matter much, as switching Protobuf out (or not) is pretty easy, and Protobuf and GraphQL work perfectly well together (if we have GraphQL at the WS too).
Also noted: the main blocker on DS’s PR (if OS is otherwise happy enough) is lack of test coverage. Discussed with BK how to add the right unit tests … but at this stage DS won’t do that until we have confirmation that the PR is otherwise good to go for now.
That’s right; at the moment protobuf is the driving data for GraphQL. And this protobuf data is identical at both the WS and the UIS, so a GraphQL endpoint can be at both ends (an additional change; no impact on the current PRs). Protobuf is not only a data store format but also ideal for communication over ZeroMQ-like tech, and I have a plan for incremental updates using it. But it is very easy to replace if we decide to.
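Roughly how the pieces relate, as a sketch: the stand-in objects below take the place of generated protobuf messages, and the real schema is much richer:

```python
from types import SimpleNamespace

import graphene

# Stand-ins for generated protobuf messages; in the real system this
# store holds protobuf data, identical at both the WS and the UIS.
DATA_STORE = [
    SimpleNamespace(name="foo", state="running"),
    SimpleNamespace(name="bar", state="waiting"),
]


class Task(graphene.ObjectType):
    name = graphene.String()
    state = graphene.String()


class Query(graphene.ObjectType):
    tasks = graphene.List(Task)

    def resolve_tasks(self, info):
        # Resolvers just read whatever the driving data store holds, so
        # swapping protobuf for dicts or classes is a low-impact change.
        return DATA_STORE


schema = graphene.Schema(query=Query)
print(schema.execute("{ tasks { name state } }").data)
```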
I want the PRs to go in before more changes are made, for a number of reasons:
The replacement of Protobuf with Dictionaries or Class containers will have low impact on the code (very easy interchange)
I don’t want the PRs to get too bloated; the protobuf-or-not decision, and incremental updates, can be follow-on PRs
Others can then contribute, and it gets the work out of my personal fork
The PRs are necessary for GUI development
However, I don’t want to proceed without OS giving approval to move forward in this way.
Just to note, other issues to do with cross-version compatibility between the components probably still need to be worked through. (And how much do protobuf and GraphQL help with this?)
The hardest part for WS-UIS (to me) would be figuring out how to incrementally update the WS data… I have a clear idea of how to use ZeroMQ PUB/SUB to keep the UI Server data identical (albeit theoretical so far).
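A minimal pyzmq PUB/SUB sketch of that idea (topic name and payload invented; the real payload would presumably be protobuf-encoded deltas):

```python
import time

import zmq

ctx = zmq.Context.instance()

# Workflow service side: publish each incremental update on a topic.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5556")

# UI Server side: subscribe and apply each delta to the local copy of
# the data store, keeping it identical to the WS copy.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "task-updates")

time.sleep(0.5)  # allow the subscription to propagate (slow-joiner gotcha)

pub.send_multipart([b"task-updates", b"<protobuf delta bytes>"])
topic, payload = sub.recv_multipart()
print(topic, payload)
```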
With the driving data at the UIS in sync, updates will trigger GraphQL subscriptions between UIS and UI. The UI client will subscribe to data channel(s), and pushed data will have an associated event flag that the JS libraries hook onto. The UIS will need to switch to websockets, and there’s a PR in the graphql-python group to help implement subscriptions with graphene-tornado.
For a quick temporary solution, could we just put the proto message in a dictionary, {"data": proto_bytes}? Then these messages could use the same JWT security; we’d just have to remember to decode bytes=>str on the client side.
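For instance, a sketch where base64 stands in for the bytes/str conversion mentioned above:

```python
import base64
import json

proto_bytes = b"\x08\x01\x12\x03foo"  # stand-in for a serialised proto message

# Server side: wrap the raw protobuf bytes so the message can ride in the
# same JSON envelope, and so use the same JWT security, as other messages.
envelope = json.dumps({"data": base64.b64encode(proto_bytes).decode("ascii")})

# Client side: remember to reverse the conversion before parsing the proto.
recovered = base64.b64decode(json.loads(envelope)["data"])
assert recovered == proto_bytes
```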