Effort Expenditure for Rewards Task with probabilistic reward outcomes.
| Field | Value |
|---|---|
| Created By | TaskBeacon |
| Date Updated | 2026-02-24 |
| PsyFlow Version | 0.1.9 |
| PsychoPy Version | 2025.1.1 |
| Modality | Behavior |
| Language | Chinese |
| Voice Name | zh-CN-YunyangNeural |
## 1. Task Overview
This task implements an EEfRT-style paradigm in which participants choose between low-effort/low-reward and high-effort/high-reward options under varying reward probabilities. After choosing, participants perform an effort execution stage and then receive completion and reward feedback. The task captures effort allocation, completion rate, and reward-sensitive decision behavior.
## 2. Task Flow

### Block-Level Flow

| Step | Description |
|---|---|
| 1. Parse mode and config | `main.py` loads the `human`, `qa`, or `sim` runtime mode and the selected YAML config. |
| 2. Initialize runtime | Window, keyboard, trigger runtime, and stimulus bank are initialized. |
| 3. Prepare offers | For each block, a custom condition generator builds deterministic (probability, hard_reward, fallback, reward-draw) trial specs. |
| 4. Execute trials | Trials run through `src/run_trial.py` with full context logging. |
| 5. Block summary | Block metrics are shown (high-effort rate, completion rate, block reward). |
| 6. Final summary | Final task metrics are shown and trial data are saved. |
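Step 1 of the flow above could be sketched as a small CLI parser; the flag names, defaults, and `VALID_MODES` constant here are illustrative assumptions, not the actual `main.py` interface.

```python
import argparse

# Hypothetical mode names, matching the three runtime modes described above.
VALID_MODES = ("human", "qa", "sim")

def parse_args(argv=None):
    """Parse the runtime mode and config path (a sketch of a main.py-style CLI)."""
    parser = argparse.ArgumentParser(description="EEfRT task runner (sketch)")
    parser.add_argument("--mode", choices=VALID_MODES, default="human",
                        help="runtime mode: interactive, automated QA, or simulation")
    parser.add_argument("--config", default="config/config.yaml",
                        help="path to the YAML task configuration")
    return parser.parse_args(argv)

args = parse_args(["--mode", "sim", "--config", "config/demo.yaml"])
print(args.mode, args.config)  # → sim config/demo.yaml
```

Using `choices=` makes an invalid mode fail fast at the command line instead of deep inside trial setup.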
### Trial-Level Flow

| Step | Description |
|---|---|
| Offer fixation | Central fixation is shown before the offer display. |
| Offer choice | The participant chooses the low- or high-effort option before a response deadline. |
| Ready stage | The chosen effort requirement and its deadline are shown. |
| Effort execution | The participant presses the effort key repeatedly within the deadline. |
| Effort feedback | A completion or failure message is presented. |
| Reward feedback | The probabilistic reward outcome is presented (or no reward if the effort was not completed). |
| Inter-trial interval | A fixation interval precedes the next trial. |
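The effort-execution step boils down to counting keypresses against a deadline. A minimal sketch, with `get_press` standing in for the real keyboard polling (PsychoPy's keyboard API is not used here so the example stays self-contained):

```python
import time

def run_effort_stage(required_presses, deadline_s, get_press):
    """Count effort keypresses until the criterion is met or the deadline passes.

    `get_press` is a callable returning True when a press is detected in the
    current polling cycle -- a stand-in for real keyboard polling.
    """
    presses = 0
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        if get_press():
            presses += 1
            if presses >= required_presses:
                return True, presses  # criterion met before the deadline
    return False, presses  # deadline elapsed first

# Simulated responder that "presses" on every poll:
ok, n = run_effort_stage(required_presses=5, deadline_s=1.0, get_press=lambda: True)
print(ok, n)  # → True 5
```

Returning both the completion flag and the raw press count supports the trial-level logging described below, where completion status is recorded alongside the behavior that produced it.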
### Controller Logic

| Component | Description |
|---|---|
| Adaptive controller | Not used; EEfRT does not require dynamic RT-window adjustment. |
Participants complete an EEfRT-style effort-based choice task. On each trial, reward probability and low/high effort options are presented. Participants choose an option and then perform the selected effort requirement using repeated keypresses within a time limit.
If the effort criterion is met, the reward outcome is sampled according to the trial's probability; if it is not met, the reward is set to zero. Trial-level records include the offer parameters, the selected option, effort completion status, reaction timing, and the reward outcome.
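That outcome rule can be sketched in a few lines; the function and parameter names are illustrative, not taken from the task's source.

```python
import random

def sample_reward(completed, probability, reward, rng=None):
    """Return the trial's reward outcome.

    If the effort criterion was met, pay `reward` with the trial's win
    `probability`; otherwise the reward is zero.
    """
    rng = rng or random.Random()
    if not completed:
        return 0.0  # incomplete effort forfeits the reward regardless of probability
    return reward if rng.random() < probability else 0.0

rng = random.Random(0)
print(sample_reward(True, 0.88, 4.30, rng))   # completed: reward paid or not per probability
print(sample_reward(False, 0.88, 4.30, rng))  # → 0.0 (criterion not met)
```

Passing an explicit `rng` keeps the stochastic outcome reproducible, which matters for the `sim` and `qa` modes mentioned below.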
The implementation supports human, qa, and sim modes with consistent trial logic and responder-context instrumentation for reproducibility and auditability.
This refactor removes the generic task controller object and replaces it with explicit condition-generation utilities so the EEfRT offer logic and stochastic outcomes are easier to audit.
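A condition-generation utility of the kind described could look like the sketch below. The `OfferSpec` fields mirror the (probability, hard_reward, fallback, reward-draw) specs named earlier; the probability levels, reward range, and names are assumptions for illustration, not the task's actual values.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class OfferSpec:
    """One trial's offer parameters (field names are illustrative)."""
    probability: float   # chance of payout if the effort criterion is met
    hard_reward: float   # high-effort option payout
    fallback: float      # low-effort option payout
    reward_draw: float   # pre-drawn uniform value deciding the reward outcome

def generate_block(n_trials, seed):
    """Build a deterministic list of trial specs from a block seed."""
    rng = random.Random(seed)
    probs = (0.12, 0.50, 0.88)  # classic EEfRT probability levels (assumed here)
    return [
        OfferSpec(
            probability=rng.choice(probs),
            hard_reward=round(rng.uniform(1.24, 4.30), 2),
            fallback=1.00,
            reward_draw=rng.random(),
        )
        for _ in range(n_trials)
    ]

# Same seed → identical offer sequence, which is what makes outcomes auditable:
assert generate_block(6, seed=42) == generate_block(6, seed=42)
```

Pre-drawing `reward_draw` at generation time, rather than sampling during the trial, means the entire block's stochastic outcomes are fixed by the seed and can be replayed or inspected after the fact.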