I recently spoke to a room of senior leaders at McGraw Hill about one of the most common mistakes I see institutions and organizations make with AI: scaling before they pilot.

A new AI tool shows promise, leadership gets excited, and suddenly it's rolled out across departments before anyone has asked basic questions about security risks, acceptable use, or human oversight.

Here's what I told the room: start with one program you trust. Identify the risks. Build governance that isn't ad hoc or crisis-triggered, but standing oversight that covers AI use in teaching, assessment, and procurement on an ongoing basis.

Then scale what works.

This short clip captures the core of that message. The full session goes deeper into how to build an AI governance structure that holds up under pressure — from vendor vetting to faculty training to measuring what actually improves learning outcomes.

If your institution or organization is navigating these same questions and you'd like me to bring this conversation to your leadership team, conference, or retreat, I'd love to connect: aviva@avivalegatt.com

What's your institution's biggest challenge when it comes to piloting AI tools safely? I want to hear it.
