Passing CYLC_TASK_CYCLE_POINT to a Jinja2 filter


Is it possible to pass CYLC_TASK_CYCLE_POINT to a Jinja2 filter?
For example:
{{ CYLC_TASK_CYCLE_POINT | strftime('%H') }}




Jinja2 is processed once at start-up, when Cylc reads the workflow configuration, rather than when tasks run, so it is not possible for Jinja2 to access this variable.

But you can do it in Bash:

isodatetime "${CYLC_TASK_CYCLE_POINT}" --print-format '%H'


Hi @prajeeshag,

To expand on Oliver’s reply a little:

In Cylc, Jinja2 acts as a preprocessor to programmatically generate the final workflow configuration that the scheduler loads at start-up.

A task’s cycle point is determined at run time, when each new instance of the task is created as the workflow moves through the task graph. Particular cycle points don’t “exist” at the outset, because in principle there could be no end to the sequence of them.

At run time, the scheduler sets CYLC_TASK_CYCLE_POINT in the job environment so that the running job can know its own cycle point if it needs to. The task job script can therefore read and manipulate the value at run time, via any command or program that can read the environment. The isodatetime program mentioned by Oliver is installed with cylc-flow and is designed specifically for manipulating ISO 8601 datetimes.
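As a minimal sketch of the same idea in Python (the cycle point value below is hypothetical; in a real job Cylc exports it into the environment, and the exact datetime format depends on your workflow's cycling configuration):

```python
import os
from datetime import datetime

# Hypothetical value for illustration only; in a real Cylc job this
# variable is set by the scheduler in the job environment.
os.environ.setdefault("CYLC_TASK_CYCLE_POINT", "20240115T0600Z")

point = os.environ["CYLC_TASK_CYCLE_POINT"]

# Parse a cycle point of the assumed form YYYYMMDDTHHMMZ and pull out
# the hour component, mirroring the strftime('%H') idea from the question.
dt = datetime.strptime(point, "%Y%m%dT%H%MZ")
print(dt.strftime("%H"))  # prints "06"
```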

I hope that helps.

@oliver.sanders @hilary.j.oliver Thanks, I understand this. I suspected this would be the case, but I was confused by an earlier thread where Jinja2 was mentioned as a possible solution. (Arithmetic with $CYLC_TASK_CYCLE_POINT in standard date/time format - #2 by sadie.bartholomew)

Anyway, what I actually need to do is find the exact number of days between two task cycle points. I will be providing the CYCLE_INTERVAL in years, e.g. 'P10Y'. I was thinking of writing a Python script that takes CYLC_TASK_CYCLE_POINT and CYCLE_INTERVAL and returns the duration in days.

Is there any better way to do this?
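A minimal sketch of that calculation, using only the Python standard library (the function name and the assumed cycle point format YYYYMMDDTHHMMZ are illustrative; the isodatetime library that ships with cylc-flow could also be used for this):

```python
from datetime import datetime

def days_in_interval(cycle_point: str, years: int) -> int:
    """Days between a cycle point and the point `years` years later.

    Hypothetical helper: assumes cycle points like "20200101T0000Z"
    and an interval given in whole years (e.g. P10Y -> years=10).
    """
    start = datetime.strptime(cycle_point, "%Y%m%dT%H%MZ")
    # Step the year forward directly; a 29 February start date would
    # need extra handling, which this sketch omits.
    end = start.replace(year=start.year + years)
    return (end - start).days

# 2020-2030 spans three leap days (2020, 2024, 2028).
print(days_in_interval("20200101T0000Z", 10))  # prints 3653
```

The exact day count depends on how many leap years the interval crosses, which is why a fixed conversion factor like 365 * years would be wrong here.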


Finally, I did it like this in the job script.




Sorry about that; it looks like an error. I have gone back and amended the post for future readers.