Discussion about this post

Joe Sterling:

Thanks for the report, Tobin. I have become much more concerned about AI safety after reading AI-2027.com and the International AI Safety Report 2025, and after watching the April 26, 2025, CBS interview with Geoffrey Hinton on the tempo of AI development and the perils he anticipates. Hinton, a Nobel laureate and former Google AI researcher, is a colleague of Ray Kurzweil. I highly recommend these three sources to put some perspective around the risks emerging behind all the fun stuff LLMs can do.

As you know, I am an early adopter and experimenter. I've been teaching programs to help folks engage with the new tools. But the forecasts from these credible sources, set against the current White House/Senate/House refusal to regulate, are very unsettling. As Hinton puts it, "We've brought home a really cute tiger cub. We have nowhere near enough understanding of what it will grow into." More importantly, we can't trust the people who are delivering it to protect us from it.

I look forward to reading credible and trustworthy news that these concerns are being addressed. I can't decide whether Semafor's report this week, that OpenAI is observing "more sycophantic" behavior in its latest model, is good news or bad. The AI-2027 team would say, "This is exactly what we've been warning you about." https://www.semafor.com/article/05/06/2025/chatgpts-latest-model-noticeably-more-sycophantic

Tobin Trevarthen:

Thank you for your comment. I am glad my aha and WTF moments... translate. We are all in this growth curve together. I am in the "I don't know what I don't know" modality.
