Dash Offline Basal Fallback Feasibility Review
Last updated: 2026-04-02 12:08 ET Status: Team review required Owner: BionicLoop engineering
Purpose
This note evaluates a specific question for DASH operation:
- If phone and pump are separated for more than a defined threshold, can the pod already have a pending fallback basal that will activate without further app communication?
This is narrower than the existing general "offline basal fallback" idea already captured in product and architecture docs. The core question here is whether the fallback can be armed ahead of time so that it starts later on the pod if the phone is no longer able to communicate.
Executive Summary
Current repo evidence supports this conclusion:
- A pod-side autonomous "pending fallback basal" is not currently shown to be supported by the BionicLoop pump abstraction or the current OmniBLE surface.
- The current codebase clearly supports only immediate dosing actions: immediate temp basal, cancel temp basal, suspend, resume, and immediate basal schedule replacement.
- A simpler fallback is feasible in principle: when the offline threshold is reached and communication is still available, the app could issue an immediate fallback basal command at that time.
- If the app and pump are already separated before the threshold is reached, the current codebase does not show a safe or proven way to make the pod begin a new fallback basal later without fresh communication.
- Resume accounting is a real blocker, not a detail. If offline basal is ever delivered during a communication blackout, the resumed algorithm path needs a reliable representation of what basal insulin was actually delivered during the blackout. The underlying algorithm bridge supports this concept, but the current app/core host path does not yet expose enough delivery history.
Recommended product position for now:
- Do not commit to a "pending basal on DASH" design yet.
- If offline basal fallback remains a requirement, treat phase 1 as an app-issued immediate fallback at threshold while comms is still alive.
- Require a separate design for resume-time basal reconciliation before any shipping implementation.
Current Documented Intent
The repo already describes offline basal fallback as a planned behavior:
- Requirements: proposed offline basal after >= 15 minutes with no valid CGM plus algorithm run.
- Architecture: revert to predefined offline basal on DASH, show offline indicator, then replace with current algorithm recommendation once CGM resumes and the algorithm runs.
- OmniBLE API notes: same proposal, with a note to reconcile missing delivery events when returning online.
- Execution Plan: "Extended fallback execution (offline basal) implementation" is still parked, not implemented.
Important distinction:
- Those docs describe an offline fallback concept.
- They do not prove that DASH currently supports a pre-armed delayed fallback which starts later without new controller communication.
What The Current Pump Surface Actually Supports
BionicLoop pump abstraction
The app/core pump abstraction currently exposes only immediate actions:
- `refreshStatus()`
- `deliverBolus(units:)`
- `setTempBasal(rateUnitsPerHour:duration:)`
- `cancelTempBasal()`
- `suspendDelivery()`
- `resumeDelivery()`
See:
- `BionicLoopCore/Sources/BionicLoopCore/Ports/PumpService.swift`
- `BionicLoop/Integrations/Pump/PumpServiceAdapter.swift`
There is no app-facing concept of:
- "schedule this basal to begin later"
- "arm this fallback only if communication is lost"
- "program a watchdog timer that changes delivery mode if no new controller command arrives by time T"
Current answer at the app layer:
- BionicLoop today can issue only immediate commands.
- It cannot queue a second future command behind a current command.
OmniBLE temp basal support
Current OmniBLE dosing support is immediate:
- `enactTempBasal(unitsPerHour:for:)`
- `runTemporaryBasalProgram(unitsPerHour:for:automatic:)`
See:
- `OmniBLE/OmniBLE/PumpManager/OmniBLEPumpManager.swift`
The temp basal API takes only a rate and duration. It does not expose a delayed start time or a conditional "activate later if disconnected" primitive.
Important command constraints confirmed in the current OmniBLE implementation:
- supported temp basal durations are 30 minutes to 12 hours
- duration 0 is a special cancel/resume-scheduled-basal case, not a 0-minute future-programming primitive
- supported temp basal rates do include 0 U/hr
Implication:
- a 0 U/hr temp basal is available
- a 15-minute temp basal is not available through the current Dash surface
This matches the public DASH user-facing behavior described in the official Omnipod DASH Technical User Guide, which documents temp basals as 30 minutes to 12 hours and describes basal program changes as explicit immediate activation decisions rather than delayed future-start programming.
OmniBLE basal schedule support
OmniBLE does also support a basal schedule and immediate basal schedule replacement:
- `OmniBLE/OmniBLE/OmnipodCommon/BasalSchedule.swift`
- `OmniBLE/OmniBLE/PumpManager/OmniBLEPumpManager.swift` (`setBasalSchedule`)
But that is not the same as a future pending fallback.
Current evidence shows:
- the schedule is a full 24-hour basal profile
- writing it is immediate
- `setBasalSchedule(...)` cancels current delivery before saving the new profile
So, while the pod can autonomously run a programmed basal schedule, the current repo does not show a clean one-shot "activate fallback later if controller goes away" feature. Repeatedly rewriting the pod basal schedule as a rolling future fallback is only a theoretical idea at this point, and would be a high-risk design change because it changes the pod's underlying autonomous schedule, not just a temporary command.
Additional current-app constraint:
- BionicLoop does not currently maintain a dynamic algorithm-owned basal program on the pod.
- The current DASH setup flow initializes the pod-facing basal schedule with an all-day 0.0 U/hr schedule during onboarding.
- No current app runtime path continuously rewrites the pod basal schedule on every algorithm step.
So there is no existing "true dosing basal rate" sitting on the pod that a temp basal would naturally revert to in closed-loop operation.
Direct Answer To The Proposed Command Pattern
Question under review:
- With each successful algorithm step while the pump is available, can we set an immediate 15-minute window at 0.0 U/hr, and have a true dosing basal rate begin after that window, so that it only matters if the phone later stops talking to the pump?
Current answer:
- Not with the currently demonstrated command surface.
Why not:
- Temp basal duration cannot be 15 minutes. Current Dash support here is 30 minutes to 12 hours, with 0 reserved for cancel/resume.
- Temp basal does not queue a second future dose command.
- When temp basal ends, the pod reverts to the active basal program.
- There is no current app command such as "start this other basal later."
- BionicLoop does not currently keep a dynamic algorithm-owned basal program on the pod.
- The current app setup path seeds a 0.0 U/hr basal schedule.
- The runtime does not currently rewrite the basal schedule on every loop step.
- Rewriting the full basal schedule every step to fake a delayed future start would be a fundamentally different architecture.
- `setBasalSchedule(...)` is immediate. It cancels current delivery and replaces the active pod basal program.
- It is not a delayed conditional fallback mechanism.
What is technically closest?
The closest currently available patterns are:
- Immediate 0 U/hr temp basal for 30 minutes, after which the pod reverts to whatever active basal schedule is already programmed.
- Immediate basal schedule replacement, where the pod begins following the new schedule right away based on the current schedule offset.
Neither of these is the same as:
- "arm a pending fallback to activate in 15 minutes if the controller goes silent"
Why the "0 now, real rate later" workaround is risky
Even if engineering tried to emulate this by rewriting the full basal schedule, the behavior would be risky because:
- the schedule replacement is immediate, not pending
- the rewrite would affect the pod's autonomous baseline delivery, not just a temporary fallback
- the runtime would need to keep pod schedule state synchronized with every algorithm step
- resume accounting would become even more important because the pod would be autonomously delivering from a rewritten schedule during blackout
Feasibility Assessment By Approach
Approach A: Pre-armed pending basal on pod that starts later after separation
Assessment: Not established by the current codebase. Low confidence / unproven.
Why:
- No current BionicLoop abstraction exposes delayed or conditional pump commands.
- No current OmniBLE temp basal surface exposes delayed start.
- The only autonomous pod-side programmable delivery behavior visible here is the regular basal schedule, and the available API replaces it immediately.
- The repo does not contain a pod-side watchdog concept such as "if no controller contact by threshold, switch to fallback basal."
What would need to be proven before this could be treated as feasible:
- that the Dash protocol can encode a future-start or conditional basal behavior at the pod level, or
- that a rolling basal-schedule rewrite approach is clinically acceptable, technically robust, and compatible with the current closed-loop design
Current conclusion:
- Do not assume this is feasible from existing implementation evidence.
Approach B: App issues fallback basal immediately when threshold is reached and comms is still alive
Assessment: Feasible in principle. Medium confidence.
Why:
- Current surfaces can issue immediate temp basal.
- Current runtime docs already contemplate entering an offline mode after a threshold.
- User-facing offline indicators and alerting fit the current runtime model much better than a hidden pod-side pending schedule.
Major work still required:
- explicit offline-mode state machine
- threshold monitor and trigger ownership
- fallback basal selection policy
- duration cap and user action rules
- resume/recovery reconciliation
- verification on real hardware
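As a discussion aid, the offline-mode state machine and threshold trigger from the work list above could be sketched roughly as follows. This is a hypothetical Python sketch, not existing BionicLoop code; the state names and the 15-minute threshold are assumptions pending team approval:

```python
# Illustrative sketch only: a minimal offline-mode state machine for
# Approach B. State names and the threshold value are assumptions.
from datetime import datetime, timedelta

OFFLINE_THRESHOLD = timedelta(minutes=15)  # assumed; team must approve

class OfflineFallbackMonitor:
    def __init__(self, now: datetime):
        self.state = "closed_loop"
        self.last_valid_algorithm_run = now

    def on_algorithm_step_succeeded(self, now: datetime) -> None:
        self.last_valid_algorithm_run = now
        self.state = "closed_loop"

    def on_tick(self, now: datetime, pump_reachable: bool) -> str:
        """Returns the action the runtime should attempt this tick."""
        if now - self.last_valid_algorithm_run < OFFLINE_THRESHOLD:
            return "none"
        if self.state == "closed_loop" and pump_reachable:
            # Threshold reached while comms is alive: issue the
            # fallback temp basal now (Approach B).
            self.state = "offline_fallback_commanded"
            return "issue_fallback_temp_basal"
        if not pump_reachable:
            # Approach C territory: nothing can be commanded now.
            self.state = "offline_unreachable"
            return "alert_user"
        return "none"
```

The key property the real implementation must preserve: the fallback command can only be issued while comms is still alive, which is exactly the Approach B/C boundary described above.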
Approach C: Separation happens before threshold, and no comms is available at threshold
Assessment: Not feasible with currently demonstrated behavior.
Why:
- if the phone cannot communicate with the pod at threshold time, the app cannot issue a new temp basal then
- the current repo does not demonstrate a previously armed delayed fallback that the pod can autonomously start later
Practical implication:
- if the product requires safe fallback even when separation begins before the threshold is reached, then either:
- a real pod-side autonomous fallback capability must be proven, or
- the system must rely on whatever autonomous basal program is already running on the pod by design
Resume And Reconciliation: Why This Is A First-Class Design Problem
Current positive evidence
The pump integration already has recovery-oriented building blocks:
- `recoverUnacknowledgedCommand(using:)` in `OmniBLE/OmniBLE/PumpManager/PodCommsSession.swift`
- `dosesForStorage(...)` in `OmniBLE/OmniBLE/PumpManager/PodCommsSession.swift`
- storage of `NewPumpEvent` records in `BionicLoop/Integrations/Pump/AppPumpManagerDelegate.swift`
This is useful because it means BionicLoop does have a basis for reconciling what happened on the pod after uncertainty or disconnection.
Current limitation in the algorithm host path
The algorithm bridge can represent basal backfill across a blackout:
- `basalInsulinDelivered`
- `deliveryTime`
See:
- `BionicLoopCore/Sources/Algo2015Bridge/include/AlgorithmInterface.h`
That is exactly the sort of information needed when a delivery happened during a communication gap and the resumed step needs to account for it.
But the current app/core host path is much narrower:
- `PumpStatus` carries only one `lastDelivery` record
- `RealBUDosingAlgorithm` only maps one request time plus requested/delivered units into the algorithm input
See:
- `BionicLoopCore/Sources/BionicLoopCore/Domain/PumpStatus.swift`
- `BionicLoopCore/Sources/BionicLoopCore/Algorithms/RealBUDosingAlgorithm.swift`
Conclusion on algorithm needs
Yes, the algorithm effectively needs delivered-basal information on resume if an offline basal fallback has been running during a blackout.
Without that:
- first resumed dosing decisions may underestimate or overestimate insulin on board
- recovery behavior will depend on incomplete delivery history
- the system could double count, undercount, or miss fallback insulin delivered while disconnected
So resume accounting is not optional. It is part of the core safety case for offline basal fallback.
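To illustrate what resume accounting would need, here is a hedged sketch of blackout-interval basal totaling. The record shape is hypothetical; in practice the data would come from pump event history such as `dosesForStorage(...)` output, not from the single `lastDelivery` record the host path carries today:

```python
# Illustrative sketch only: totals fallback basal delivered during a
# communication blackout so resumed algorithm steps can account for it.
# BasalDeliveryRecord is a hypothetical shape, not an existing type.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BasalDeliveryRecord:
    start: datetime
    end: datetime
    rate_units_per_hour: float
    certain: bool  # False if acknowledgement was never confirmed

def blackout_basal_units(records, blackout_start, blackout_end):
    """Sum basal delivered inside the blackout window.

    Returns (delivered_units, all_certain). If any overlapping record
    is uncertain, resume logic should not treat the total as truth.
    """
    total = 0.0
    all_certain = True
    for r in records:
        overlap_start = max(r.start, blackout_start)
        overlap_end = min(r.end, blackout_end)
        if overlap_end <= overlap_start:
            continue
        hours = (overlap_end - overlap_start) / timedelta(hours=1)
        total += r.rate_units_per_hour * hours
        all_certain = all_certain and r.certain
    return total, all_certain
```

Note the second return value: delivery certainty is as important as the unit total, because an uncertain record means the resumed step cannot trust its insulin-on-board state.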
Safety And Product Risks
Main safety concerns:
- Incorrect autonomous insulin after loss of communication.
- False belief that fallback started when the pod never actually accepted it.
- Double counting delivered insulin on resume.
- Missing delivered insulin on resume.
- Interference between fallback basal and current suspension / temp basal / command uncertainty states.
- Extended offline duration without clear patient-visible status.
- Hidden controller-schedule behavior that clinicians and operators cannot easily understand.
Specific design risk with a rolling future-schedule approach:
- If the app repeatedly rewrites the pod basal schedule to keep pushing out a fallback threshold, schedule-management bugs could directly affect autonomous pod behavior even while closed-loop is otherwise healthy.
Recommended Direction
Recommended near-term position
- Treat a true pod-side pending fallback as unproven.
- Do not implement it based only on current app-layer assumptions.
- If the product still needs offline basal fallback, pursue the simpler threshold-time immediate fallback command first.
- Gate any implementation on resume-time delivery reconciliation design.
Recommended product language
Use this framing in team discussions:
- "Offline basal fallback is potentially feasible as an app-issued immediate fallback when the threshold is reached and the pump is still reachable."
- "A pre-armed pod-side pending fallback basal is not yet demonstrated by the current Dash/OmniBLE implementation surface."
Refined Design Under Review: Masked Fallback Schedule
The current team discussion has narrowed from "pending basal that starts later" to this design:
- keep an offline fallback basal schedule active on the pod using `setBasalSchedule(...)`
- during healthy closed-loop operation, keep that schedule masked with repeated 0.0 U/hr temp basal commands
- if communication is lost, the most recent 0.0 U/hr temp basal eventually expires and the underlying fallback basal schedule becomes active
Mechanical feasibility
This is more plausible than a true delayed-start pending basal, but it is still not a trivial change.
What currently lines up:
- Dash supports 0 U/hr temp basal.
- Dash temp basal minimum duration is 30 minutes, which is acceptable if the agreed watchdog window can be 30 minutes rather than 15 minutes.
- Dash supports immediate basal schedule replacement through `setBasalSchedule(...)`.
What this means operationally:
- the fallback schedule would not be "pending"
- it would be the currently active pod basal program underneath a continuously renewed 0.0 U/hr temp basal mask
Critical command-order constraint
The order matters:
- write or refresh the fallback basal schedule
- immediately apply the masking 0.0 U/hr temp basal
Why:
- `setBasalSchedule(...)` cancels current delivery and immediately activates the new schedule
- if the schedule refresh succeeds and the following mask command fails, the fallback basal starts delivering immediately
This is the main safety hazard of the masked-schedule approach.
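A sketch of the required ordering and its failure mode, using a hypothetical pump adapter interface (not the real `PumpServiceAdapter` API):

```python
# Illustrative sketch only: enforces the schedule-then-mask ordering and
# treats a failed mask after a successful schedule write as an immediate
# hazard. The pump parameter is a hypothetical adapter stand-in.
def refresh_fallback_with_mask(pump, fallback_schedule) -> str:
    # Step 1: writing the schedule activates it immediately on the pod.
    if not pump.set_basal_schedule(fallback_schedule):
        return "schedule_write_failed"  # old schedule + old mask remain
    # Step 2: the mask must follow immediately; if it fails, the
    # fallback basal is now delivering while the loop is healthy.
    if not pump.set_temp_basal(rate_units_per_hour=0.0, minutes=30):
        return "HAZARD_fallback_active_unmasked"  # alarm + retry urgently
    return "ok"
```

The asymmetry is the point: a failed schedule write leaves the old state intact, but a failed mask after a successful schedule write leaves the fallback actively delivering, which the runtime must surface as an alarm, not a quiet retry.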
Daily refresh possibility
If the fallback schedule changes slowly, then a daily `setBasalSchedule(fallbackSchedule)` refresh may be enough, instead of rewriting the schedule every 5 minutes.
That is attractive because it reduces:
- schedule churn
- immediate-activation risk frequency
- command traffic
But even with daily refresh, the design still depends on:
- reliable repeated 0.0 U/hr / 30-minute masking while comms is healthy
- a clearly defined source for the fallback schedule
- resume-time accounting when the fallback schedule delivers during blackout
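The watchdog arithmetic implied by a 30-minute mask renewed on each 5-minute loop step can be sketched as follows (both values are assumptions for discussion, not approved parameters):

```python
# Illustrative sketch only: with a 30-minute 0.0 U/hr mask renewed on
# each 5-minute loop step, the fallback schedule activates 30 minutes
# after the LAST successful renewal. Values are assumptions.
from datetime import datetime, timedelta

MASK_DURATION = timedelta(minutes=30)   # Dash minimum temp basal duration
RENEW_INTERVAL = timedelta(minutes=5)   # current loop step cadence

def fallback_activation_time(last_successful_mask: datetime) -> datetime:
    """When the underlying fallback schedule becomes active if no
    further mask command ever reaches the pod."""
    return last_successful_mask + MASK_DURATION

def missed_renewals_before_exposure() -> int:
    """How many consecutive renewal attempts can fail before the mask
    expires and fallback delivery begins."""
    return MASK_DURATION // RENEW_INTERVAL - 1
```

With these assumed values the effective watchdog window is 30 minutes, and several consecutive renewal failures are tolerated before exposure, which is a useful robustness property of the masked-schedule shape.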
Fallback Schedule Source: Fixed 120 mg/dL Target
The current design question is not just whether the pump can run the fallback. It is also:
- how BionicLoop should derive the fallback basal schedule, specifically for a target equivalent to 120 mg/dL
Current target input support
The good news is that 120 mg/dL is already a first-class algorithm input in this repo.
Current path:
- clinician config persists `targetMgdl`
- runtime constructs the algorithm with `setPointNominal` and `setPoint`
- `RealBUDosingAlgorithm` passes those values directly into the bridge each step
See:
- `BionicLoop/Runtime/ClinicalAlgorithmConfigStore.swift`
- `BionicLoop/Runtime/LoopRuntimeEngineOperationalSupport.swift`
- `BionicLoopCore/Sources/BionicLoopCore/Algorithms/RealBUDosingAlgorithm.swift`
- `BionicLoopCore/Sources/Algo2015Bridge/Algo2015Bridge.c`
So, if fallback should truly represent a 120 mg/dL target, the input side is not the hard part.
Why a scalar factor is not acceptable
One tempting shortcut is:
- run the live algorithm at the current target
- apply a factor to the resulting basal-related output
- treat that as the 120 mg/dL-equivalent fallback
The current code and tests do not support that assumption.
Why not:
- target is a direct algorithm input, not an external post-processing knob
- the algorithm is stateful across steps
- current tests only prove monotonic direction, not linear or proportional convertibility
- production output currently exposes requested insulin for the actual run, not a transformable "target factor" contract
Evidence:
- `BionicLoopCore/Tests/BionicLoopCoreTests/Algo2015MetamorphicTests.swift`
- `BionicLoopCore/Tests/BionicLoopCoreTests/Algo2015DifferentialReplayTests.swift`
Safe conclusion:
- higher target tends not to increase insulin
- lower target tends not to decrease insulin
- but the repo does not justify using a fixed multiplier to convert one target's output into another target's output
Recommended algorithm contract
If the fallback schedule must reflect a true 120 mg/dL target, the safest design is:
- run a dedicated fallback algorithm track configured at 120 mg/dL
- persist state for that fallback track separately from the live control track
- derive the fallback schedule from that dedicated track, not from a factor applied to the current live-target run
This could be implemented as a shadow algorithm instance or an explicit fallback basal provider contract. Either way, it should be treated as a real algorithm integration feature, not a UI/runtime heuristic.
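One possible shape for that dedicated track, sketched with hypothetical names (no `FallbackBasalProvider` contract exists in the repo today; `make_algorithm`, `step`, and `recommended_hourly_basal` are placeholders for whatever the real bridge interface exposes):

```python
# Illustrative sketch only: a dedicated fallback track pinned to a
# fixed 120 mg/dL target alongside the live track, instead of scaling
# live output. All names are hypothetical.
class FallbackBasalProvider:
    """Owns a separate algorithm instance pinned to 120 mg/dL."""

    def __init__(self, make_algorithm):
        # Separate stateful instance: the algorithm is stateful across
        # steps, so live-track state cannot be reused or rescaled.
        self.fallback_algorithm = make_algorithm(target_mgdl=120)

    def step(self, cgm_mgdl, delivered_insulin_units):
        # Advance the fallback track with the same inputs the live
        # track sees, but at the fixed fallback target.
        return self.fallback_algorithm.step(cgm_mgdl, delivered_insulin_units)

    def current_fallback_schedule(self):
        # Derive the pod-facing schedule from this track's own output,
        # never from a multiplier applied to the live-target output.
        return self.fallback_algorithm.recommended_hourly_basal()
```

The design choice this encodes: the fallback schedule is always the product of a real algorithm run at 120 mg/dL, so the "no scalar factor" constraint from the section above is enforced structurally rather than by convention.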
Candidate data surfaces
There are two broad ways to source fallback basal data:
- Recommended:
  - add an explicit runtime/algorithm contract for fallback basal rate or schedule generation at target 120 mg/dL
- Not recommended without algorithm-owner signoff:
  - infer fallback schedule from inspection-only fields such as `hourlyBasalBasisActive`, `hourlyBasalBasisNominal`, or related inspection snapshots
Those inspection fields do exist:
- `BionicLoopCore/Sources/BionicLoopCore/Domain/AlgorithmInspectionTelemetry.swift`
- `BionicLoopCore/Sources/BionicLoopCore/Algorithms/Algo2015InspectionSnapshotAdapter.swift`
But they are currently inspection/debug surfaces, not the production dosing contract. Shipping the fallback schedule from those fields without a formal contract would be risky.
Resume And Reconciliation Impact For The Masked-Schedule Design
The masked-schedule design does not remove the recovery problem. It makes it more important.
If the mask expires during communication loss and the fallback schedule delivers autonomously, the resumed algorithm path must know:
- how much basal was delivered
- over what period
- whether delivery certainty is known or uncertain
Current bridge capability is promising:
- `basalInsulinDelivered`
- `deliveryTime`
Current host/runtime limitation remains:
- only one `lastDelivery` record is carried into `PumpStatus`
- production algorithm execution currently treats `basalInsulinRequested` as a 5-minute micro-dose output, not as a reusable hourly schedule source
So any masked-schedule implementation must include a dedicated resume-time basal reconciliation design before it should be treated as shippable.
Conflict With The Current Degraded-Stepping Model
There is a direct architectural conflict between autonomous fallback basal on the pod and the current live algorithm stepping policy.
What the current runtime does today
Current runtime policy intentionally allows the live algorithm to keep stepping when CGM is fresh but pump status is unavailable:
- the coordinator refreshes pump status and falls back to an unavailable pump input tuple when refresh fails or pump state is unknown
- the algorithm still executes that 5-minute step
- only pump command application is blocked until pump status recovers
See:
- `BionicLoopCore/Sources/BionicLoopCore/Runtime/LoopRuntimeCoordinator.swift`
- `BionicLoopCore/Tests/BionicLoopCoreTests/LoopRuntimeCoordinatorPumpAvailabilityExecutionTests.swift`
- `BionicLoopCore/Tests/BionicLoopCoreTests/SimulationHarnessTests.swift`
- Requirements
- Architecture
This policy is correct for the current product because the pod is not expected to continue autonomous algorithm-directed fallback dosing underneath those degraded steps.
Why autonomous fallback basal changes that assumption
If the pod can begin delivering fallback basal while communication is lost, then the live algorithm cannot safely continue advancing through those same 5-minute intervals with "pump unavailable" sentinel input and no delivered-basal truth.
Otherwise:
- the live algorithm state will already have advanced across the blackout interval
- the algorithm's internal stacked-insulin state will not include the fallback basal actually delivered on the pod
- reconnect-time "replay" becomes a state-correction problem against already advanced live state, not a clean deterministic reconstruction
In other words:
- current degraded stepping is compatible with "no dosing applied while pump is unavailable"
- it is not compatible with "pod may be autonomously dosing fallback basal during that same interval"
Implication for a masked fallback schedule
If masked fallback basal is ever pursued, BionicLoop will likely need a new policy for those intervals:
- detect when fallback basal may have become active
- stop treating the current live degraded-step path as authoritative state
- prevent normal closed-loop resume until delivered basal across the blackout interval is reconciled with sufficient confidence
There are only a few viable architecture options:
- Option 1 (recommended): freeze live algorithm step advancement once fallback basal may be active; on reconnect, replay missed 5-minute steps using historical CGM/BG inputs plus reconstructed delivered basal per step.
- Option 2 (higher risk): continue a speculative live state during the outage, but discard it and rebuild authoritative state by replay on reconnect.
- Option 3 (only viable with extremely strong evidence): continue stepping live only if the host can deterministically inject the exact fallback basal delivered on each missed step as trusted input.
At present, the repo does not show enough host-side delivery history or reconciliation fidelity to support option 3.
Replay viability conclusion
Replay is still the best recovery shape if autonomous fallback basal is used, but only if the system can reconstruct:
- the exact time the masking 0.0 U/hr temp basal expired
- the fallback schedule that was active underneath
- whether pod state remained suitable for delivery throughout the blackout
- per-step basal delivered across the missed 5-minute intervals
If those facts cannot be reconstructed with high confidence, the safe fallback is not replay. It is blocked closed-loop resume plus a degraded/manual recovery path until delivery truth is re-established.
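That resume gate can be sketched as follows, with hypothetical field names, to show how the four reconstruction facts above would select replay versus blocked resume:

```python
# Illustrative sketch only: gate closed-loop resume on whether the
# blackout facts listed above can be reconstructed with confidence.
# All field names are hypothetical.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BlackoutReconstruction:
    mask_expiry_time: Optional[object]        # when the 0.0 U/hr mask ended
    fallback_schedule: Optional[object]       # schedule active underneath
    pod_state_confirmed_ok: bool              # delivery-capable throughout?
    per_step_basal_units: Optional[List[float]]  # per missed 5-min step

def resume_decision(r: BlackoutReconstruction) -> str:
    complete = (
        r.mask_expiry_time is not None
        and r.fallback_schedule is not None
        and r.pod_state_confirmed_ok
        and r.per_step_basal_units is not None
    )
    if complete:
        # Safe shape: deterministically replay the missed 5-minute
        # steps with delivered basal as trusted input.
        return "replay_missed_steps"
    # Otherwise do NOT resume closed loop on advanced live state.
    return "block_resume_require_manual_recovery"
```

The gate is deliberately all-or-nothing: a partially reconstructed blackout defaults to blocked resume, matching the conclusion above that replay is only the safe recovery shape when delivery truth is fully re-established.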
Proposed Investigation And Test Plan
Phase 1: Capability proof
- Confirm at the Dash protocol / OmniBLE maintainer level whether a delayed or conditional pod-side fallback basal exists at all.
- If not, explicitly close the "pending basal" concept as unsupported and move to app-issued threshold fallback only.
Phase 2: Immediate fallback prototype
- Implement a bench-only prototype that issues fallback temp basal when the offline threshold is reached while comms is still available.
- Verify pod acceptance, duration handling, alerting, and user-visible offline state.
Phase 3: Resume accounting prototype
- Reconnect after fallback basal has been active.
- Reconcile delivery using `dosesForStorage(...)`, `NewPumpEvent` records, and pod status where needed.
- Design the host-side mapping needed so resumed algorithm steps can consume delivered basal during blackout, not just one last-delivery record.
Phase 4: Failure drills
Run explicit scenarios:
- Separation begins before threshold.
- Separation begins after fallback temp basal has already been accepted.
- Fallback command is sent but acknowledgement is uncertain.
- Reconnect occurs before fallback duration ends.
- Reconnect occurs after fallback duration ends.
- Pump is suspended, faulted, or has no active pod at the time fallback would otherwise be entered.
Questions The Team Needs To Decide
- Is the product goal specifically a pod-side autonomous pending fallback, or is threshold-time immediate fallback sufficient?
- What is the approved threshold for entering offline mode?
- What basal source should offline fallback use:
  - fixed conservative subject schedule
  - last known safe rate
  - last accepted algorithm rate
  - dedicated fallback algorithm output at 120 mg/dL
- What maximum offline duration is acceptable before requiring user action?
- Is fallback target always fixed at 120 mg/dL, or should it follow the currently active clinical target?
- If fallback target is fixed at 120 mg/dL, do we require a dedicated shadow algorithm track rather than any output-scaling shortcut?
- What resume-time accounting is required before dosing can safely resume?
- Do we require this feature for the current IDE scope, or does it remain a post-baseline enhancement?
Bottom Line
Based on the current repository:
- "Can DASH hold a pending basal that activates later if the phone is gone?" Answer: not proven by the current app or OmniBLE surface.
- "Can BionicLoop issue an offline fallback basal once the threshold is reached while the pod is still reachable?" Answer: yes, likely feasible.
- "Can BionicLoop safely estimate a 120 mg/dL fallback schedule by applying a factor to the current live-target output?" Answer: no. Current code and tests support target as a first-class algorithm input, not as a safe post-processing multiplier.
- "Does the algorithm need delivered basal information when communication resumes?" Answer: yes. That is a core safety requirement for any offline basal feature.