My wall_clock() trigger is always satisfied with a large offset

I have a suite that's intended to run in real-time mode. I set my initial cycle point via next(), i.e.:

initial cycle point = next(T-00; T-15; T-30; T-45)

so that the run starts on the next quarter hour. The cycling interval is 15 minutes, so the first cycle point falls between 0 and 15 minutes from now, and the second cycle point is 15 minutes after that. To track the wall clock, I first define an xtrigger:

[[xtriggers]]
    # external trigger condition that's met when wall clock time equals
    # cycle point time
    wall_clock_equals_cycle_point = wall_clock()

…and then reference that xtrigger in my scheduling graphs, making it a dependency of the tasks that start the processing for each cycle point, along the lines shown below.
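
For example, the relevant graph line looks something like this (Cylc 8 graph syntax; start_processing is just a placeholder task name):

[[graph]]
    PT15M = "@wall_clock_equals_cycle_point => start_processing"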

I have extensively tested this, both via simulation and using more elaborate stub task scripts (needed because some of the tasks edit the suite graph). All my tests have gone just fine.

However, now I need to test the full suite on real data. Because I don't have a real-time data feed, I need to run the suite over a historical interval as if it were real time. One way to do that would be to reset the system clock on the machine the suite runs on, setting it to the past period I want to simulate (the period where I have data), but there are all kinds of reasons why I need to avoid that route. So instead, my plan was to change Cylc's timekeeping. I first changed the initial cycle point to 1 year 7 months ago, e.g.:

initial cycle point = next(T-00; T-15; T-30; T-45) -P1Y7M

This appears to work fine: the cycle point values look exactly as desired. Then, I changed the xtrigger to have a comparable offset, i.e.:

[[xtriggers]]
    wall_clock_equals_cycle_point = wall_clock(offset=P1Y7M)

This isn't working right. In the first two cycle points, the tasks that depend on @wall_clock_equals_cycle_point have that dependency evaluated as satisfied right at suite startup, even though their cycle points should be a few minutes (and 15 plus a few minutes) after the offset wall-clock value.

What am I doing wrong?

Thanks!


Well done (and apologies) - you appear to have found a bug in the wall_clock xtrigger.

I think what's going on is this: the initial cycle point computation is correct - Cylc subtracts P1Y7M from the current time (modulo the next(..) rounding) - but the wall clock offset is being parsed into seconds as a context-free interval, which for months and years involves assumptions (how many days in a month, and so on).
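
Roughly, the difference looks like this (just a sketch of the general issue in Python, not Cylc's actual code; the 365-day year and 30-day month constants below are illustrative assumptions):

from datetime import datetime
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

now = datetime(2021, 11, 1, 12, 0)  # an arbitrary example "current" time

# calendar-aware subtraction, as done when computing the initial cycle point
then = now - relativedelta(years=1, months=7)
calendar_days = (now - then).days       # 579 for this example date

# "context-free" conversion of P1Y7M to a fixed length, using assumed
# 365-day years and 30-day months (illustrative constants only)
nominal_days = 1 * 365 + 7 * 30         # 575

print(calendar_days - nominal_days)     # the two disagree by a few days

The exact constants used internally may differ, but any fixed conversion of nominal months and years will drift from calendar arithmetic by a few days, which is enough to make the clock trigger fire early (or late).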

You’re probably the first to notice this because we don’t typically want to run historical cycle points on a current clock trigger, or use a massive clock offset. (It’s a good idea though, for your purpose).

We should be able to fix this. As a workaround, convert your interval to avoid months and years, e.g. P1Y7M is roughly P582D (the exact day count depends on the calendar dates involved - see the snippet below).
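
To get the exact day count for your own interval, a quick Python snippet like this (using python-dateutil; not part of Cylc) will tell you what to use:

from datetime import datetime
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

now = datetime.now()
start = now - relativedelta(years=1, months=7)   # the historical start you want
print(f"offset=P{(now - start).days}D")          # e.g. offset=P582D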

Here's my test workflow (Cylc 8 syntax):

[scheduler]
    cycle point time zone = +1300  # same time zone my box runs in
[scheduling]
    initial cycle point = next(T--00) - P582D   # next 00s - 582 days
    runahead limit = P1  # 1 active cycle at a time, easier to observe
    [[xtriggers]]
        wait = wall_clock(offset=P582D)
    [[graph]]
        PT1M = "@wait => foo"
[runtime]
    [[foo]]

This correctly runs 1 task every minute in real time, despite the cycle points being 582 days in the past.

If I change P582D to P1Y7M, the clock trigger is (suspiciously!) exactly 4 days off.


Fix tracked here: Fix clock-trigger offset computation. by hjoliver · Pull Request #4511 · cylc/cylc-flow · GitHub

I tested the workaround, and it works for me! Thanks!


Thanks @funkapus - I have exactly this same need, and was gearing up to ask whether there was an option for a wall-clock offset for testing nominally real-time suites with canned data intervals - I hadn't thought about the xtrigger way of addressing this! Thanks @hilary.j.oliver too - I'll use the fix you mention.