Safety is an Epic Split
For Responsible AI, Flexibility is Key
I don’t expect answers to the complexities of AI governance to be found in an award-winning TV commercial from over a decade ago. TV ads traffic in symbolic meaning, not audit criteria, compliance schemes, and regulations; as social critic Neil Postman long argued, their business is selling, not informing. And yet (pause for dramatic effect),
I submit to you Jean-Claude Van Damme’s 2013 commercial for Volvo Trucks. It’s not a white paper, report, best practice, or piece of legislation - just Van Damme doing an epic split between two 18-wheelers.

It may sound counterintuitive to put this forward as an example of what Responsible AI might look like. It looks the opposite of safe. But keeping pace with AI’s exponential growth demands exactly this level of balance and flexibility from the responsible AI practitioner. The prevailing narrative of safety is one where, through legislation or compliance, the truck stops to be inspected. It’s this static, cumbersome framework that accelerationists point to when they clamor for deregulation - a no-guardrails mindset that is rife with downstream harms. By being agile and adaptable, by easing the tension between them, both Responsible AI and acceleration stand to benefit. The catch is to keep them driving at the same speed. Otherwise, well, you can just imagine what happens.
This is accelerating responsible innovation in action - it’s an epic split moving at the speed of change down a shared road, one that benefits us all.
Maybe this truck thing comes from the fact that my dad was a truck leasing consultant.
Some AI developers refuse to acknowledge there’s a speed limit, or even a road, for that matter. And if they do, they floor the gas, hoping no one’s looking.