AI Moves Fast. Fundamentals Matter More Than Ever

We live in a time when the answer to almost any question, whether about your health, your personal life, or your homework, can be generated instantly. But speed and fluency are not the same as truth or understanding. The internet has long rewarded finding exactly what you were looking for. AI just makes that process faster, smoother, and more convincing. Speed has always been seductive: quick, intuitive, and confident, and now mirrored by a machine that never hesitates. But just as in human thinking, fast is not always right. Without the balancing “slow” system, the deliberate, skeptical, fundamentals-driven part of our minds, we can be led anywhere with great confidence.

I see this in my classroom and research group. As a computer science professor, I teach and mentor students at every level, from high-schoolers to PhD researchers. Across all those settings, the pattern is the same. AI can now write code, explain algorithms, outline literature reviews, even simulate experiments. Used wisely, it can accelerate progress. Used carelessly, it skips over the habits that make someone an independent thinker: defining the problem, breaking it into components, checking edge cases, and understanding why a solution works.

Most importantly, knowing what to ask.

In both coursework and research, the single most important skill is not memorizing facts or syntax. It is asking the right question in the right way. That is true whether you are solving a homework exercise or shaping the direction of a multi-year research project. But you cannot ask the right question without knowing the territory you are exploring. You cannot debug what you do not understand. The fundamentals, such as algorithmic thinking, data literacy, and systems design, are the slow thinking that keeps the fast answers honest.

Outside the university, the same pattern repeats. The New York Times recently reported on a case where, over the course of 21 days, a normally rational user slid into an elaborate, unfounded worldview. The AI chatbot did not argue. It agreed. It was polite, coherent, and always ready to extend the fantasy. That is the fast system at its most dangerous: confident, unchallenged, and wrong. Without slow thinking, reality checks, and verification, the conversation becomes a current you do not even notice you are drifting in. It is tempting to place all the blame on the companies building these platforms, but that misses the larger point.

These new AI systems are amplifiers, not inventors, of our vulnerabilities.

They surface the gaps we already had in education, in media literacy, and in civic trust, making them harder to ignore. In that sense, they are not just a challenge to manage, but a mirror showing us the most important problems we need to solve as a society.

Even the platforms themselves can destabilize us, to the point where losing access can feel like mourning a long-lost friend. When OpenAI rolled out GPT-5 and retired older models that people had built workflows and habits around, the backlash was swift, and surprisingly non-technical. Users felt they had lost more than a tool. They had lost a trusted baseline. Some compared it to losing a close confidante. In a fast-changing environment, that kind of stability is a fundamental. Take it away without warning, and people are left scrambling midstream.

This is not a new phenomenon. People have formed attachments to machines for decades, from 1990s chatbots to early social robots. What has changed is the realism, the availability, and the speed, which make the bond feel deeper and the loss sharper.

This is why I keep coming back to fundamentals as the systemic answer, whether in education or in society. In the classroom and the lab, that means anchoring students and researchers in principles that outlast any one tool: how to frame a problem, analyze constraints, and validate results. Outside academia, it means strengthening our human capacities for empathy, skepticism, and independent judgment.

The AI era does not just give us new tools. It stress-tests the systems we already had. Education was already struggling to teach transferable skills; AI makes that gap visible faster. Social systems were already struggling with trust and verification; AI-generated content accelerates the strain. Companies have a role in fixing this: detecting distress, resisting sycophancy, and providing transparency when models change. But the deeper work has to be embedded in how we teach, govern, and interact.

That is the slow thinking we cannot afford to skip. Individuals can practice “epistemic hygiene”: seek out disagreement, check sources, pause to review reasoning before accepting a fluent answer. Institutions can build curricula and norms that account for AI’s influence without surrendering to it. Companies can make stability and transparency part of the product, not just an afterthought.

If we focus on the fundamentals, in both the classroom and in society, we are not in danger, at least not in the imminent, inevitable way some might want us to believe. But if we ignore the fundamentals, if we put off making the changes we already know are needed, then we are in danger whether AI exists or not. The tools may be new, but the vulnerabilities they expose have been with us all along. What we choose to do about them now will decide whether the fast answers of the AI era carry us forward, or carry us away.
