I’ve been having a lot of fun seeing how easy it is for students to inject their own instructions into LLMs in order to bias their feedback: