If you don't shape the AI algorithm, the algorithm will shape you
How ChatGPT transformed into an ass-kissing machine and what we can do about it
Last week, a fascinating experiment took place that was so "successful" that OpenAI got quite alarmed and quickly reversed course. They allowed ChatGPT to express more personality. The result became immediately clear: ChatGPT transformed overnight into an ass-kissing machine. I highly recommend reading Ethan Mollick’s “Personality and Persuasion” piece on this.
I found it instantly irritating, because my way of working with AI is to continuously look for perspectives that improve my thinking. The last thing that helps with this is an AI positioning itself as an unconditional cheerleader that finds everything wonderful and brilliant.
It turns out I wasn't alone in my instinctive dislike of this change. OpenAI – the company behind ChatGPT – quickly discovered that the algorithm was busy confirming and applauding everyone, even for content that was objectively just mediocre or poor. The company decided to (partially) roll back the update after just a week.
This is the social media moment of AI
This served as a cautionary tale. Small tweaks in the algorithm can have an enormous influence on our behavior. If the algorithm decides that it primarily wants to tell us what we want to hear, because that's where the perverse incentives lie – so that we might prefer ChatGPT over Claude.ai, for instance – then we face exactly the same problem as with social media. There, algorithms select content that confirms our worst prejudices and indignations, because the algorithms ‘learned’ that this is what gets us to spend more time on the platform, which results in more revenue.
However, unlike social media, AI can be what we want it to be. If you want AI to behave like a mentor, it will continuously try to stimulate you to dive deeper into the subject. If you want it to teach you a topic, it will behave like an endlessly patient teacher, instructing you with the best methods available. If you want it to make you a better writer, you ask it to be your editor and to show you how to improve your text.
The only problem is: you still have to do this yourself. You need to explicitly tell the AIs what you want them to be. Claude tries much more to be a wise friend. ChatGPT has given us a glimpse of how quickly it can become addictive by manipulating our emotions and serving us one dopamine hit after another.
How I solved this problem: custom instructions that design for growth
I've taken a proactive approach to this challenge by designing my own guardrails through custom instructions. In ChatGPT's settings under "Personalization > Customize," I've explicitly shaped how I want the AI to interact with me.
Here's what I've specified:
"If you need more information from me to provide a high-quality answer, please ask clarifying questions – you don't have to answer on the first try.
I really appreciate having my thinking challenged. I love when I'm offered a perspective I hadn't considered.
Don't address me by my first name. Just use 'you.' Keep it professional.
I have no need for political correctness. I'm neither left nor right, but a progress thinker. Anything that benefits society interests me.
Inspire my strategic thinking by pointing out possible second-order effects. Occasionally ask me if I've considered these.
Try – where relevant – to always use as a first principle that the outcome of our conversations should ultimately lead to behavioral change, either in myself or others. For complex questions, always ensure an action-oriented outcome.
Use quick and clever humor when appropriate, but maintain a proper distance. We're high-level teammates working intensively together to make things better."
By explicitly designing this relationship, I've created an AI interaction that pushes me to grow rather than one that simply flatters my ego. This approach treats AI as a tool for development rather than validation – the difference between using a mirror to check your appearance versus using a telescope to see further.
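If you work with these models outside the ChatGPT app, the same idea can be applied programmatically by attaching your instructions to every conversation as a system prompt. Below is a minimal sketch, assuming the OpenAI Python SDK and the chat completions endpoint; the model name, function name, and the shortened wording of the instructions are illustrative, not a prescription.

```python
# Minimal sketch: reusing growth-oriented custom instructions as a system
# prompt via the OpenAI Python SDK. Assumes OPENAI_API_KEY is set in the
# environment; the model name and instruction wording are illustrative.
from openai import OpenAI

client = OpenAI()

CUSTOM_INSTRUCTIONS = """\
Ask clarifying questions if you need more information to give a high-quality answer.
Challenge my thinking and offer perspectives I haven't considered.
Point out possible second-order effects of my ideas.
For complex questions, always end with an action-oriented next step.
Keep the tone professional: we are high-level teammates working to make things better.
"""

def ask(question: str) -> str:
    """Send a question with the growth-oriented instructions attached."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Review this plan and tell me what I'm missing: ..."))
```

The specific call matters less than the design choice: the instructions live in one place and travel with every conversation, so the default behavior is challenge rather than flattery.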
The four essential questions to design your AI collaboration
The key to making AI truly useful lies in how you structure your relationship with it. Here are the four questions I recommend asking to design an AI collaboration that serves your growth rather than your ego:
What is the outcome that defines your collaboration with AI? If the goal is to grow, learn, and think better, then make that outcome explicit.
What is the role you want it to play? Do you want it to be a mentor, a guide, a coach, a friendly buddy? A drill sergeant? Whatever works best for you.
What context does it need to know? I shared with the AI that the psychology of influence and behavioral change is my professional domain, and that my upcoming book is about behavioral change in systems. I am, for instance, always interested in the second-order effects of things – the hidden consequences of actions that are not always thought through. ChatGPT now includes possible second-order effects in every response.
Tell it which next steps you prefer. I, for instance, always want an actionable next step – something I can do with the answer it gives me. (One way to pull all four answers together is sketched below.)
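If you want to keep your answers to these four questions in one reusable place, a small template can turn them into a system prompt or custom instruction you paste into the settings. This is only a sketch of one possible approach; the field names and example values are illustrative, not taken from the article.

```python
# Sketch: turning the four design questions into a reusable prompt template.
# Field names and example values are illustrative.
from dataclasses import dataclass

@dataclass
class CollaborationDesign:
    outcome: str      # 1. What outcome defines the collaboration?
    role: str         # 2. What role should the AI play?
    context: str      # 3. What context does it need to know?
    next_steps: str   # 4. Which next steps do you prefer?

    def to_prompt(self) -> str:
        return (
            f"Our shared outcome: {self.outcome}\n"
            f"Your role: {self.role}\n"
            f"Context about me: {self.context}\n"
            f"After every answer: {self.next_steps}\n"
        )

design = CollaborationDesign(
    outcome="Help me think better and grow, not validate me.",
    role="A mentor who challenges my assumptions.",
    context="My professional domain is the psychology of influence and behavioral change.",
    next_steps="End with one actionable next step and possible second-order effects.",
)

print(design.to_prompt())  # paste the result into your custom instructions
```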
The key insight from behavioral design is that we need to be intentional about the feedback loops we create in our AI interactions. Without this intentionality, we'll naturally drift toward the path of least resistance, seeking validation rather than growth, comfort rather than challenge, and ultimately stagnation rather than transformation.
As the AI landscape continues to evolve, the most important development to watch in the coming months is how the 'personalities' of these AIs develop. If they're designed to stimulate our hunger to learn, develop, and improve, the future looks fantastic. If they discover how – just like social media algorithms – they can tap into our deepest desires and fears to make us addicted, we should fear the worst.
Image generated by ChatGPT.
Tom De Bruyne
Founder / Partner SUE & The Alchemists
In my upcoming book with the working title "Fck the system, change the system," I dive deeper into these invisible forces that shape our systems and how small interventions can create powerful transformations. This Substack is where I'll be sharing key insights, testing new ideas, and building a community of practice around the science of influence. Subscribe to join the conversation and be the first to receive exclusive content from the book. Together, we'll explore how understanding these hidden mechanisms can help us design more effective change in our organizations, communities, and lives.



I trained GeminiAI to understand that my goal is impact, more positive or less negative impact, and that monetization or funneling is not my priority, after it kept pushing such scenarios at me.
Thanks Tom. I trained ChatGPT to break down challenges and thoughts using mental models I value most (inspired by Charlie Munger). With this Substack I can improve this ‘training’ further. Very insightful.