A Raccoon’s Charcuterie of Perspectives
“Don’t be intimidated by what you don’t know. That can be your greatest strength and ensure that you do things differently from everyone else.” - Sara Blakely
In recent weeks, I’ve found myself caught in a whirlwind of conversations centered on DeepSeek and Artificial General Intelligence (AGI). My feeds are flooded with news articles, podcasts, and videos analyzing DeepSeek and various Large Language Models (LLMs). While we have unprecedented access to information, that access doesn’t guarantee we understand the whole picture.
I’ve dedicated hours to researching various LLMs, evaluating their capabilities, and gathering differing perspectives. Rather than contribute another analysis to the conversation, I’d like to shift focus to how we process information and why keeping an open mind matters now more than ever.
One of my intentions this year is to actively engage with perspectives beyond my usual sources. That means reading literature or media that challenge my assumptions or come from unexpected places. I had to practice this rigorously while writing my doctoral dissertation, and I’ve found it to be one of the best ways to sharpen critical thinking. But let’s be honest—staying open-minded isn’t always easy.
Seeking out different perspectives doesn’t mean passively consuming every piece of information we come across. It means engaging thoughtfully, asking questions, and evaluating credibility. Some of the most insightful moments I’ve had in the last year came from reading perspectives I initially disagreed with.
For example, when researching AI regulation, I noticed two dominant camps: those advocating for strict government oversight and those warning that heavy regulation could stifle innovation. At first, I leaned toward one side, but after reading expert analyses from both viewpoints, I realized the issue was far more nuanced than I had initially thought. Critical thinking isn’t about choosing sides—it’s about navigating complexity.
Take another simple example: the last time I searched for new dinnerware, my social media feeds were instantly flooded with ads from different brands. Before I could consciously pull myself out of the rabbit hole, I was getting discount offers left and right. Sound familiar?
Our digital experiences are shaped by algorithms designed to capture our attention and reinforce our preferences. If you primarily read tech-optimistic takes on AI, your feeds will serve you more of the same. If you engage with AI skepticism, you’ll see more of that. This creates an illusion that “everyone” thinks a certain way when in reality, we’re just being served a curated slice of opinions.
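To make that feedback loop concrete, here is a deliberately simplified Python sketch. It is my own toy model, not any platform’s actual ranking code: each click on a topic slightly increases the odds of seeing that topic again, so even a mild initial lean snowballs into a feed dominated by one viewpoint.

```python
# Toy illustration (not a real recommendation system): an engagement-weighted
# feed that keeps serving more of whatever a reader already clicks on.
import random

TOPICS = ["ai_optimism", "ai_skepticism", "regulation", "dinnerware_ads"]

def pick_post(preferences):
    """Sample the next post in proportion to past engagement per topic."""
    total = sum(preferences.values())
    weights = [preferences[t] / total for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=1)[0]

def simulate_feed(rounds=1000, seed=42):
    random.seed(seed)
    # Start with only a mild lean toward one viewpoint.
    preferences = {"ai_optimism": 3, "ai_skepticism": 1,
                   "regulation": 1, "dinnerware_ads": 1}
    for _ in range(rounds):
        topic = pick_post(preferences)
        # Each click feeds back into the future ranking weights.
        preferences[topic] += 1
    return preferences

if __name__ == "__main__":
    final = simulate_feed()
    total = sum(final.values())
    for topic, count in sorted(final.items(), key=lambda kv: -kv[1]):
        print(f"{topic:>15}: {count / total:.0%} of the feed")
```

Run it a few times: the topic with the small head start ends up dominating, which is the echo-chamber dynamic in miniature.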
One thing I’ve started doing is deliberately following thought leaders with opposing viewpoints. I ask myself: Am I exposing myself to the full spectrum of ideas or just reinforcing what I already believe? If we want to think critically, we need to break out of our own echo chambers.
A question I’m often asked is: What are the most significant barriers to implementing AI in long-term healthcare, and how do we address them?
Introducing new technology, whether in an industry, an organization, or our personal lives, demands change. That change typically provokes mixed reactions: some embrace it, others resist it, and many are left unsure. One thing, however, is evident: effective change management hinges on robust communication and education.
Each stakeholder brings their own experiences, fears, and expectations to the conversation. Caregivers may worry about job security, administrators focus on cost and efficiency, and families are concerned about human connection and data privacy. These perspectives are all valid, and if we fail to consider them, we risk designing solutions that only address part of the problem.
I’ve seen firsthand how different stakeholders react to AI-powered digital companions in long-term care. Initially, caregivers were wary, assuming AI would replace their roles. However, when they saw how it could reduce repetitive administrative tasks—freeing them to focus on meaningful human interactions—their perspective shifted. The difference? Transparent communication and an openness to different viewpoints.
Like any technological transition, AI advancements bring both advantages and unintended consequences. But at a broader level, competition in the AI space fosters innovation, pushes boundaries, and raises standards. Ultimately, we all benefit from having more choices—but only if we engage critically with the information surrounding these choices.
If we only listen to voices that confirm our existing beliefs, we risk creating an AI-driven world that reflects narrow viewpoints rather than broad, inclusive thinking. The best innovations emerge when people challenge assumptions, seek diverse insights, and make informed decisions.
When was the last time you intentionally sought a perspective that challenged your own? I’m curious to learn from your experiences.
Until then, stay curious, lead with passion, and inspire others.
-Dr. M-