If we respond to our AI moment by re-naturalizing forms of assessment we know to be artificial simply because they are un-hackable, we risk entrenching the very problem Dr. Blum identifies. The danger of this phantom objectivity (to borrow from Lukács) is that it makes arbitrary conventions—the five-paragraph essay, the timed high-stakes exam, the canned blue book response—appear to be the only natural and legitimate measures of learning rather than the outmoded constructions they are, ones that decades of academic research have shown to have little predictive value for scholarly, professional, or vocational success beyond the classroom. By treating schoolish assessments as possessing inherent educational value simply because "it worked before" (note: not really), we don't just preserve outdated practices; we actively conceal how these very mechanisms created the conditions that made AI-outsourcing seem like a rational response. Instead of transcending the synthetic approaches that drove students to AI in the first place, we would …