C-Suite Confidential—Education Market Insights: Exclusive Interview With Dan McCallum, Chief Services Officer at Unicon
Thanks for the time today, Dan. I understand Unicon has an internal approach to AI that you characterize as “Be Curious, Be Careful.” Tell us the story of this guidance. Is this the same guidance you give to your clients?
Good question, and I’m happy to say the answer is “definitely yes.” For us, it’s just a really natural perspective to take on. Curiosity itself is one of our core values, and it almost has to be, even if we weren’t talking about AI. A technology-centric company like ours that builds software, that advises our customers on their technology strategy and decision-making—if we’re not curious, the alternative is that we’re going to become irrelevant.
With AI specifically though, looking back, it’s also gratifying to realize that by the time we needed to stake out an official, company-wide stance, we were, as leadership, just a bit behind where our people were in terms of adoption and really seeing the practical value that generative AI was starting to bring to the table. And while that experience can be a bit uncomfortable, at the end of the day, that’s a great position to be in. When you’re running a business built on creativity and curiosity, what you want is for your best people to be so excited about what they’re doing and discovering that they’re the ones pushing you to keep up. And I think with AI in particular—if you read people like Ben Thompson (Stratechery)—you’ll start to realize that its potential is so open-ended that both staying current and figuring out where the real value is have to be bottom-up processes. Sure, guardrails are critical, and I’ll talk a little bit more about those. But if you think you’re going to build a successful AI strategy by prioritizing top-down approvals and centralized planning and risk elimination, at best I think you’re always going to be a few (big) steps behind.
On the other hand, it’s easy to go too far and let all that curiosity and enthusiasm get a bit out of control. And with AI in particular, that risk is amplified by a level of “fear of missing out” that is really difficult to overstate. So this is where we like to pair “Be Careful” with “Be Curious.” What we want is for people to look at AI with the same kind of mindset that the best software engineers have always had when thinking about new computing technologies: as an interesting tool that comes with a unique set of tradeoffs. There are things that AI will be exceptional at achieving, but also problem spaces where it’s just absolutely the wrong choice. And particularly because of AI’s unique properties, it will generate second- and third-order effects—both good and bad—that will be difficult to anticipate but which we have a new responsibility to grapple with.
In practice, we’ve found that the “Be Curious” part is much easier than the “Be Careful.” Because where you draw the line between “good AI” and “bad AI” is still very much a judgment call. Sometimes it’s easy. For example, when we’re setting internal use guidelines, a lot of that policy is pretty obvious: Only use services where you’re guaranteed prompt privacy, include AI attribution in creative deliverables, don’t trust AI output if you don’t know anything about the domain. But evaluating AI suitability for client projects is harder, and we’re really just starting to evolve a structured way of having those conversations. And a lot of that just boils down to classic goals-oriented planning. So we like to steer things away from “How can I build this on AI?” to “Why am I building this, and is AI the best way to solve that problem?” Interestingly, I think we’re starting to see the answer to that second question usually be “Yes, but not all of it.”
Of course, there is considerable hype at the moment suggesting that AI is a cure-all, but your observations show that isn’t true, at least right now. Do you think that will change?
I do think AI is going to be disruptive, but it’s still hard to know exactly how, and unfortunately far too much of the disruption discourse has a deeply unhelpful tone. I’m sure you’ve seen it. It’s all those people yelling at you on your social media feeds: “If you’re not doing X with AI, you’re already irrelevant!” I don’t think we invented the phrase “AI hype bros,” but I’ve noticed our people have started using it more and more as a pejorative label when they see this kind of nonsense. Which I think is great, and they’re 100% correct to dismiss it. AI is serious. Those people out there just trying to foment anxiety and build personal reputation are not. Their utility is limited to keeping a pulse on what new applied AI method or tool is gaining traction; beyond that, I’d tell you to ignore them. And if you hear rumors about your engineering teams bragging about “vibe coding” for anything other than prototyping, I’m going to encourage you to have some very pointed discussions with your technical leadership about where those rumors are coming from and why.
That said, I can also tell you that I have had a number of very long moments of professional pause in the recent past, where I have really started to wonder what the future holds for traditional knowledge work, especially for young professionals just trying to find a way to get a foot in the door. When I see people I respect and have followed for years—people like Noah Smith and Matt Yglesias—start to favorably compare AI services like OpenAI’s Deep Research to the output they’d expect from a professional researcher, I’m sorry, it is just hard, at a human level, to see that as an unalloyed good. Somebody is going to feel some pain, and unfortunately, as always, there’s a good chance it’s somebody on the lower end of the power and wealth spectrum.
The way I look at this, though, aside from my fundamental long-term optimism, is that there are just certain characteristics of generative AI as currently manifested that mean a) it’s still best to understand it as an enabler rather than a replacer, and b) there are some things it will probably never be the right tool for. As an example, one of our architects recently gave a presentation on how his team is leveraging generative AI in an ambitious math courseware delivery system. His point was basically: It’s true that we’ve seen LLMs (large language models) make tremendous progress in how they handle mathematical prompts, but as the basis for delivering college-level math curriculum, until the LLM is 100% accurate, it might as well be 0% accurate. It’s just not the right tool. So his team doesn’t use the LLM for the math. They use it for the personalization, for the curriculum suggestions, for the functions that actually benefit from a bit of unpredictability and creativity, and can tolerate a bit of imprecision. So that’s a great example of the “Be Curious, Be Careful” maxim: Use the LLM where it makes sense, but don’t cram it into everything just because it happens to be there to be crammed.
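To make that division of labor concrete, here is a minimal, hypothetical sketch of the pattern described above, not Unicon’s actual system: the exact math is checked deterministically (SymPy is assumed here purely as a stand-in), while the LLM call, stubbed out below, is reserved for the personalization and suggestion functions that can tolerate a bit of imprecision.

```python
# Minimal sketch (hypothetical names throughout): deterministic math grading
# stays out of the LLM; the LLM is used only for personalization.
from dataclasses import dataclass
import sympy as sp


@dataclass
class LearnerProfile:
    name: str
    struggling_topics: list[str]


def grade_answer(expected_expr: str, submitted_expr: str) -> bool:
    """Deterministic grading: never delegated to the LLM."""
    expected = sp.simplify(sp.sympify(expected_expr))
    submitted = sp.simplify(sp.sympify(submitted_expr))
    return sp.simplify(expected - submitted) == 0


def personalization_prompt(profile: LearnerProfile) -> str:
    """Builds the prompt for the LLM-backed suggestion step."""
    topics = ", ".join(profile.struggling_topics)
    return (f"Suggest two short practice activities for a student named "
            f"{profile.name} who is struggling with: {topics}.")


def llm_complete(prompt: str) -> str:
    # Placeholder: a real system would call whatever model API it uses here.
    return f"[LLM suggestion for prompt: {prompt!r}]"


if __name__ == "__main__":
    print(grade_answer("2*x + 2*x", "4*x"))  # True, checked exactly
    profile = LearnerProfile("Ada", ["factoring quadratics"])
    print(llm_complete(personalization_prompt(profile)))
```

The point of the split is simply that a wrong answer from `grade_answer` is unacceptable, while an imperfect suggestion from `llm_complete` is merely suboptimal.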
You talk about AI as an “enabler” rather than a “replacer.” How can we protect against the rise of AI in digital learning undermining the value of human interaction, specifically in the educator-learner relationship?
I think the broad consensus is still that we’re not actually all that close to an automated teaching and learning silver bullet, whether AI-based or not. Expert human-human instruction is still the gold standard, especially at very low ratios (“tutoring”); i.e., if resources permitted, we’d see incredibly positive impacts from providing 1:1 attention to all learners. And while you can certainly find papers claiming this or that seemingly impressive result from automated AI-based tutoring systems, I think if you go and try some of those systems yourself, you’ll find that they often just don’t pass the sniff test. I won’t call out specific platforms here, but I can tell you from personal experience that even some of the most well-funded AI tutoring systems are frankly underwhelming when you try to use them alongside somebody who is genuinely struggling with the underlying material.
I think it’s also important to keep in mind here that when we have historically talked about “tutoring,” especially in the human sense, we’re actually talking about instructional support that is secondary to the main classroom experience, i.e., “tutoring” is additive, not a replacement. I think some of the most interesting research going on in this area right now is actually around AI-based “tutoring co-pilots.” I like this approach because it zeroes in on what’s already working (human tutors) and tries to make it just that much better. Instead of attempting direct learner instruction and all the complexities that entails, the idea is to mediate the insights of the AI engine through the judgment and expertise of a human tutor. While serious economic challenges remain, as a theoretical approach, tutor co-pilots just make so much intuitive sense to me as the right logical, incremental step toward a truly valuable AI-enabled teaching and learning ecosystem. I think too many solutions are trying to skip straight to a utopian end state, which, frankly, might not ever be possible, let alone desirable.
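Sketched below, under the assumption of a simple approve-or-rewrite workflow (all names hypothetical), is what that mediation might look like: the model only drafts, and the human tutor decides what actually reaches the learner.

```python
# Minimal sketch of the tutor co-pilot mediation pattern (hypothetical names):
# the model proposes a hint, but only the tutor-approved text is delivered.
from dataclasses import dataclass


@dataclass
class HintDraft:
    learner_question: str
    model_suggestion: str


def draft_hint(learner_question: str) -> HintDraft:
    # Placeholder for the LLM call that proposes a hint to the tutor.
    suggestion = f"[model-drafted hint for: {learner_question!r}]"
    return HintDraft(learner_question, suggestion)


def tutor_review(draft: HintDraft, tutor_edit: str | None) -> str:
    """The human tutor is the gatekeeper: accept the draft as-is,
    or replace it entirely before it is sent to the learner."""
    return tutor_edit if tutor_edit is not None else draft.model_suggestion


if __name__ == "__main__":
    draft = draft_hint("Why does (a+b)^2 != a^2 + b^2?")
    # The tutor decides the draft needs a gentler framing and rewrites it.
    final_hint = tutor_review(draft, "Try expanding (a+b)(a+b) term by term.")
    print(final_hint)
```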
Let’s talk about edtech products leveraging AI. They can’t all be doing it well, right? When evaluating a product that touts AI features, how can an institution distinguish between hype and substance?
Try it! You can usually tell pretty quickly.
Otherwise, I’ll point to Paul Ford’s $15 Volvo piece. He has a short list of very pointed questions to ask when you’re being pitched an “AI-powered” solution. His last one is particularly incisive: “Why wouldn’t I just do this myself if I can just tell AI to do it? What value do you provide?”
And, again, remember that the “generative” in “generative AI” is a key qualifier. LLMs can be borderline magical and absolutely an appropriate technical choice when creativity trumps precision. But if it’s the other way around—if precision and reliability are the top priorities—and you’re still being sold an LLM-based solution, you should start to smell hype.
Finally, I’d love to know how you stay updated on the most recent AI developments, given the pace of AI advancements in education. Who do you follow and what do you read?
Blog: Ethan Mollick’s Substack (One Useful Thing). His recent “Speaking Things Into Existence” prompted one of those “long moments of professional pause” about the future of knowledge work that I mentioned earlier.
Book: It’s not strictly about AI, and I think it’s starting to show a bit of age, but I still think Cathy O’Neil’s Weapons of Math Destruction remains required reading on the downside risks of model-based decision-making in general. I also always like to recommend that people who either are or should be curious about how LLMs work (especially semi-technical leaders) take the time to sit down with a physical copy of Stephen Wolfram’s What Is ChatGPT Doing ... and Why Does It Work? Some of his broad speculation at the end on the relationship between LLMs and human neurology is easily worth the price of admission.
Podcast: I’m going to pitch Ben Thompson’s Stratechery again. It’s not all AI, and some of the material is paywalled, but the tech industry analysis is just top-notch, and with the paid newsletter subscription comes a range of podcast options. For AI in particular, the best thing I’ve heard in the recent past, by a wide margin, is his late-February interview with Benedict Evans. (And I know you’re asking about podcasts here, but I’ll also plug Benedict’s annual presentations on tech trends. They are absolutely worth the time, especially his most recent “AI Eats the World.”)