While the last few months of 2025 seemed to be marked by questions around whether AI is a bubble, recent headlines sum up the new theme that has taken over: “‘The dark side of AI’: Wall Street weighs recent stock sell-off over disruption fears” from Yahoo Finance. “Tech Stocks Dip as AI Doubts Linger on Wall Street” from Bloomberg. “$300 Billion Evaporated. The SaaS-Pocalypse Has Begun” from Forbes. “Why the ‘AI scare trade’ might not be done” from CNN.
As these various articles explain, what started in 2023–2025 as unbridled enthusiasm for AI’s potential to drive productivity and growth seems to have evolved into widespread investor anxiety about its disruptive power. Investors have begun pricing in the idea that AI isn’t just an enhancer—it’s a potential destroyer of established business models, particularly in software, with ripple effects into other industries potentially at risk of AI automating jobs, commoditizing products, or eroding pricing power. As economist Ed Yardeni explains, the core fear is that large language models (LLMs) and agentic AI (systems that can act autonomously) are advancing so quickly that they could render many traditional services obsolete, and even unseat their own creators. For instance, AI’s ability to generate code, analyze data, or handle complex tasks at warp speed means companies might no longer need to pay for expensive software subscriptions or human-intensive services.
As we described in our recent Market Outlook, “Companies that sell infrastructure for AI applications, such as semiconductor and semiconductor equipment manufacturers, have performed extremely well over the past three years and according to data from Bloomberg, they have outperformed companies that use AI, such as software companies, by about 150%. Software companies have been plagued by an intensifying investment narrative that AI may harm or even obsolete them due to AI’s ability to write software code and lower entry barriers for new competitors.”
We wish we could make sense of it—better yet, we wish we had a crystal ball to understand how all of this will shake out. Alas, we don’t, and we’re left trying to interpret investor behavior that has had us scratching our heads for weeks now.
As humans, we tend to want to know what we’re up against. There’s real comfort in feeling safe, and a whole psychology around being comfortable—it’s hard to feel overly anxious about a space you’re used to occupying, with rituals you’re well accustomed to. It’s uncertainty and unpredictability that really mess with us. This feels like one of those times when the paradigm we’re accustomed to is shifting, and the narrative we’re being told is that it’s shifting at an unsettlingly fast pace.
This chart from Our World in Data is a wild depiction of the relative timeline of technological advancement:

According to the article, it took our ancestors 2.4 million years to learn to control fire and use it for cooking. That’s kiiiiiiind of a long time. Fast forward to 1903, when the Wright brothers took the first flight (they were in the air for less than a minute), and just 66 years later, we landed on the moon. Many people saw both the first airplane and the moon landing within a single lifetime!
That helps explain why artificial intelligence is such a transformative breakthrough: intelligence is so often the engine of innovation. The technological progress we’ve witnessed across human history could be dramatically compressed if it is propelled not only by human creativity and reasoning, but by artificial intelligence as well. It’s conceivable that advancements which once unfolded over millennia—and more recently over decades—could occur within just a year or two.
There have been wildly disruptive technologies throughout history, as the timeline shows. Even without looking at the timeline, I’m sure the computer would come to mind. And the internet. And smartphones. But what about the refrigerator? That one may have slipped your mind because of how easy it is to take for granted today and how “simple” it seems in comparison. The World Economic Forum notes that when we think of the most disruptive technologies, we think of the newest things out of Silicon Valley, while almost taking for granted truly transformative advances like refrigeration, which changed everything: the way people prepare food, what we eat, the hours spent on household tasks, women’s ability to join the workforce, farming practices, worldwide shipping, and supermarkets! The modern global food system and its many layers couldn’t exist without refrigeration. It’s mind-boggling to think of how farmers’ jobs were transformed by refrigeration, from the 1870s, when refrigerated rail cars were invented, through the 1920s, when home refrigerators became prevalent—and it’s plausible that no one could have fathomed the job creation and evolution that would ensue.
It’s like the butterfly effect on a massive scale. According to the USDA, in the 1700s nearly 80% of the US population were farmers, but by the 1900s that share had been cut in half to just 40%. Today, farmers make up less than 2% of Americans. But of course it doesn’t tell the full story to consider only that agriculture jobs were eliminated or altered—refrigeration is one of the technologies that continued to change farming (much like the cotton gin, the grain binder, and later the tractor) while creating an entirely new global industry focused on food processing, logistics and long-distance transport, and retail and supermarkets.
The idea of an AI-driven personal assistant in my ear isn’t a horrifying one to me. In fact, it sounds helpful. It sounds like a tool. It sounds like the equivalent of my remote car starter or the advent of Siri or Hey Google—conveniences that make my day easier and more frictionless. The thought that I could have something similar but smarter, saving me time and resources so I could spend more of my time doing what I want to do, is an appealing one. I’m not seeking robot friends or less human interaction, but a tool to improve the quality of my life by giving me more time to spend on what matters? Sure, sign me up. We’ve signed up for that time and time again throughout history. Why does this time feel so different? Is it different? Is it dangerous to think This Time Is Different, or does it behoove us to think that way? I don’t know.
I was a political science major in college, but my favorite classes were part of a smaller focus area (known as a “cluster” at the University of Rochester) in Brain & Cognitive Sciences. Think of it as a cross between psychology, neuroscience, cognitive development, philosophy, and anthropology—essentially a study of the mind and its processes that asks questions about human cognition. It captured so much of what I have always found interesting and still find fascinating to this day. I will never forget one class I took about machine learning and consciousness. It was probably 2007, and the topics felt more like science fiction and philosophy than anything else—it was unfathomable to imagine that these very questions would become so practically relevant just a couple of decades later. I want to share some of the core ideas we explored in that class.
In 1950, Alan Turing proposed the famous Turing Test in “Computing Machinery and Intelligence.” Instead of asking “Can machines think?” he reframed the question behaviorally: if a machine can converse in a way that is indistinguishable from a human, we should consider it intelligent. The Turing Test (also known as the “imitation game”) involves three participants: a computer, a human interrogator, and a human “foil.” The interrogator asks pointed or complex questions of the other two participants (essentially via keyboard and screen) in order to determine which of them is the computer. The computer may say anything it likes to “trick” the interrogator (for instance, it could answer “No” when asked “Are you a computer?”), while the human foil must help the interrogator make a correct identification. Turing’s conclusion, based on this test, is that if something behaves like a mind, it counts as a mind—a view that makes him a “behaviorist,” for whom biology plays no essential role in intelligence.
John Searle came along in 1980 with his own Chinese Room Argument that challenged Turing’s conclusion. Here’s the setup: Imagine a person who doesn’t speak Chinese is inside a room. This person has a large set of Chinese symbols and a rulebook in English that defines how to manipulate these symbols in response to questions written in Chinese. When a Chinese speaker outside the room sends in questions written in Chinese, the person inside the room uses the rulebook to manipulate the symbols and sends back appropriate responses. To the Chinese speaker outside, it appears as if they are engaging in a conversation with someone who understands Chinese.
Searle argues that, although the person inside the room can produce correct answers, they do not understand Chinese. They are merely following syntactic rules without any comprehension of the meaning behind the symbols. The point he makes is that computers operate similarly; they can simulate understanding language through syntax but lack genuine comprehension, a quality associated with consciousness and semantics.
Passing the Turing Test, therefore, does not prove real understanding or consciousness. For Searle, biology matters: genuine understanding arises from specific causal processes in the human brain, not from symbol manipulation alone.
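For readers who think in code, Searle’s setup can be caricatured in a few lines: the “rulebook” becomes a lookup table mapping input symbols to output symbols, with no representation of meaning anywhere in the program. This is only an illustrative sketch—the phrases in the table are invented placeholders, not part of Searle’s argument—but it makes his point concrete: the program can produce fluent-looking answers while nothing in it comprehends anything.

```python
# A caricature of Searle's Chinese Room: the "rulebook" is just a lookup
# table from input symbols to output symbols. The entries below are
# invented placeholders; the point is purely structural.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你是电脑吗？": "不是。",      # "Are you a computer?" -> "No."
}

def room(question: str) -> str:
    """Apply the rulebook syntactically; nothing here 'understands' anything."""
    return RULEBOOK.get(question, "请再说一遍。")  # fallback: "Please say that again."

print(room("你是电脑吗？"))  # prints 不是。 -- a fluent denial, zero comprehension
```

The interesting question, as the next thinker argues, is whether scaling this trick up ever amounts to something more than the trick itself.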
Enter a third thinker, Steven Pinker, who explores the distinction between human comprehension and machine processing. Pinker critiques Searle’s strong claim that machines cannot understand language, positing that the problem lies in our understanding of “understanding.” He argues that while the person in the Chinese Room may not understand Chinese, the system as a whole (the person plus the rulebook and symbols) could be said to have a form of understanding, albeit one different from human comprehension. He often frames his discussions in an evolutionary context, suggesting that human language and understanding evolved for specific social and environmental functions. He contrasts this natural evolution of human cognition with artificial constructs, implying that while machines can mimic behaviors, they do so without the evolutionary context that gives rise to human understanding. To this day, Pinker leaves room for further inquiry into the nature of consciousness and understanding, recognizing that significant philosophical questions remain open, particularly as technology advances. He encourages a balanced view—acknowledging both the capabilities and limitations of AI without letting philosophical challenges stifle technological progress.
Hopefully it’s clear how relevant these thought experiments are to the AI debate today. What felt like sci-fi philosophical debate in college is now strikingly real. Much of today’s conversation about AI, its possible limitations, and its applications alongside human beings ties back to exactly these kinds of questions. And it raises many more questions—touching ethics, purpose, and beyond—of which we’ve only begun to scratch the surface.
This not only feels like a time of uncertainty, but it also feels like a really scary time to consider what the future might entail. As a parent, it feels especially heavy. It’s hard to fathom what the world will look like 5 years from now, let alone 25, when I’m hopefully healthy, retired, and reading to my grandbabies (but it’s easy to allow doubt to creep in about that). However, it’s not the first time that it’s felt like a scary time for people, and it won’t be the last either. When faced with uncertainty, what do we do? We control what we can control, and we find what’s knowable and act on that. We combine what we know with some faith or optimism, and we move forward. I find it helpful to remember that our parents and grandparents and great-great-great grandparents weren’t immune to uncertainty or fear of this magnitude—it is part of humanity in my opinion. So is a seemingly limitless capacity for resilience, adaptability, and purpose-seeking. And that’s the part I choose to stay hopeful about.
None of us knows how this all plays out, and anyone who purports to know should probably be viewed with a healthy amount of skepticism. At Howe & Rusling, we’re committed to being both open-minded and discerning in our quest to seek long-term opportunity and value for our clients from an investment perspective.
Disclosures: This material is provided for informational and educational purposes only and is not intended as investment advice, a recommendation, or an offer to buy or sell any security. The views expressed reflect the opinions of the author as of the date of publication and are subject to change without notice. This commentary contains general observations regarding financial markets, artificial intelligence (AI), and economic trends. It is not intended to predict future market movements or the performance of any specific security, sector, or investment strategy. All investments involve risk, including the possible loss of principal. Past performance is not indicative of future results. Market volatility, economic uncertainty, technological innovation, regulatory developments, and geopolitical events may materially impact investment outcomes. References to artificial intelligence, technological disruption, or sector performance are based on publicly available information and general market narratives. There can be no assurance that anticipated technological developments will occur, that markets will react as expected, or that companies perceived as beneficiaries (or at risk) will perform in any particular manner. Sector-specific investments (including technology, semiconductor, software, and infrastructure-related companies) may be subject to greater volatility and concentration risk than diversified investments. This material may contain forward-looking statements that reflect current expectations regarding future events, technological developments, or economic conditions. Forward-looking statements are inherently uncertain and are not guarantees of future performance. Actual results may differ materially from those expressed or implied. 
References to third-party publications (including Yahoo Finance, Bloomberg, Forbes, CNN, Bloomberg data, Our World in Data, USDA, World Economic Forum, or commentary from economists or academics) are provided for illustrative purposes only. Howe & Rusling does not endorse, adopt, or guarantee the accuracy or completeness of third-party information. Data and statistics are believed to be reliable but have not been independently verified. Quoted headlines are included to reflect prevailing market sentiment and media narratives and should not be interpreted as investment recommendations. This commentary does not consider the investment objectives, financial situation, or particular needs of any specific individual. Readers should consult their financial advisor, tax professional, or legal advisor before making any investment decisions. Historical examples of technological innovation are provided for educational context only. Comparisons between past technological disruptions and artificial intelligence are illustrative in nature and do not imply that similar economic, market, or employment outcomes will occur. Howe & Rusling is an SEC-registered investment adviser. Registration with the SEC does not imply a certain level of skill or training.


