• This post is not going to focus on AI. It’s probably going to be more political than I would have previously thought my writing here would be. But I don’t have a problem with that, because in this post, I’ll be arguing for a political philosophy that every American should cherish and work hard to keep extant in our political landscape: Liberalism.

    What is Liberalism?

    Liberalism is a political philosophy that emerged during the Enlightenment, when John Locke published his Two Treatises of Government in 1689. In this work, Locke argues against the divine right of kings to rule, and against the power of feudal lords whose fiefdoms dotted the pre-Industrial Revolution world. He instead argues for things like consent of the governed – making sure people have a voice in the way they are ruled, and for Natural Rights – rights to life, liberty, and property that exist before any political consideration. Liberalism laid the foundation for liberal democracy, the American Revolution, and set the United States on a path to becoming the richest, most prosperous country in the world. It also inspired our collective journey to be the freest country in the world.

    We’ve made varying amounts of progress on each of these axes, depending on how they are measured. We have the highest GDP in the world, and rank 10th in GDP per capita. We rank 61st in life expectancy and have far worse income inequality than many other developed countries, with that inequality having worsened greatly in the past three and a half decades. What I hope to show with these statistics is that we are indeed on a journey in the United States. It hasn’t always been pretty (open any history book) and progress can seem haltingly slow (look around you in 2010, then in 2025 – you’ll notice not much has changed) but a commitment to Liberalism means that we can continue that journey. I fear we are in danger of losing that commitment and I aim to defend it.

    The Evolution of Liberalism in the United States

    There are many flavors of liberalism, too many for me to fully dissect here. Based on some rudimentary research that I’ve done to orient myself for this post, the main point of debate within liberalism is whether the state primarily acts to suppress these Natural Rights (Classical Liberalism) or to enable them to exist (Social Liberalism).

    If I had the desire, and the actual expertise, I could probably write an entire book on this, but I’m sure others have done it better than I could. Instead of doing that, I’m going to show you a graph of how I think the Democratic Party has treated its commitment to liberalism over the years. Special thanks to my co-author of this chart, Claude Opus 4.5.

    Evolution of American Liberalism

    Tracing the Democratic Party’s ideological journey from classical liberalism through social democracy to neoliberalism. The chart plots state intervention in social welfare (low to high) against commitment to individual liberty (low to high), with quadrants labeled Classical Liberalism, Social Democracy, Authoritarianism, and Paternalism. Plotted eras: Founding Era (1776–1860s), Gilded Age (1870s–1900s), Progressive Era (1900s–1920s), New Deal (1933–1945), Great Society (1964–1968), Neoliberal Turn (1980s–1990s), Obama Era (2009–2016), and Current Moment (2020s).

    Key observation: Note the leftward shift on the X-axis (state intervention) from the Great Society peak (1960s) through the Neoliberal Turn (1980s-90s). The Democratic Party maintained high commitment to social liberties while retreating from robust welfare state expansion toward market-oriented policy solutions.

    Liberalism vs. Authoritarianism

    My main motivation for writing this post is the horrific killings of both Renee Good and Alex Pretti. There have been many things that I’ve been disgusted by during the Trump era, but these incidents were particularly horrific to me because they showed that tribalism and the politics of retribution and punishment have reached levels that typically precede extremely bad times ahead. I’m still working my way through Slouching Towards Utopia and the parallels between what we are seeing today and what Germany saw in the 1920s and 1930s are not lost on me. The emphasis on punishment of the “other”, the questionable adherence to the rule of law, and the consolidation of power across aspects of government – these are not ingredients for a stable, liberal democracy. We can decide if the Trump years are a decade-ish departure on our rocky journey as a polity, or if they represent a fork in the road that leads us into an authoritarian future.

    I don’t want to be the one who continually blows the “descent into fascism” whistle, because I don’t think it’s particularly useful to jump to those sorts of extremes, and doing so is easily cast as histrionic by people who do not agree. But I also think we have been frogs in a very large cauldron, and the temperature has been turned up every day for a dozen years. It’s kind of hard to beat back the authoritarian-curious accusations when you are, six years later, still trying to come up with a fake story about how you actually won an election.

    The killings of both Good and Pretti have also shown that there are cracks starting to form in the Trump coalition, which gives me hope that there are swathes of voters who are open to examining their own loyalty/fealty to Trump. It doesn’t give me hope though that many elected Republican officials continue to be yes men and foot-soldiers. My hope for these officials is that they realize that Trump will not be in power forever, and they will no longer be solely judged by their obedience to him.

    Liberalism and the Fourth Industrial Revolution

    So what relevance does a commitment to liberalism have as it relates to the age of AI? Well, for starters, evolutions of liberalism have often coincided with, or been forced by, changes in the economic landscape that necessitated new thinking around the roles that different entities play in society. The Progressive Era was liberalism’s response to the fact that the Second Industrial Revolution, beginning around 1870, created lots of new problems that the state was best positioned to address. Workers’ rights, anti-trust regulation, and the beginnings of the social welfare state in countries outside the US were all borne out of the rapid industrialization of that era. Surely child labor laws reduced the amount of factory output that was possible, but that seems like a pretty good trade to me.

    I think the US got fairly lucky with the Third Industrial Revolution, which isn’t as widely recognized as the first and second, but which I equate with the digitization and miniaturization of the economy, starting in about the 1950s. At this point, much of the world was still rebuilding from WWII and the US was a major manufacturing hub. China, a global superpower today, was very much a developing nation. As such, there was neither as significant an effort toward, nor a need for, large-scale economic reform – the US was doing pretty well economically, and the burst of technology was making lots of cool products (TV, automobiles, personal computers) widely accessible, and these products made life materially better for the population. The Civil Rights movement and the Great Society of the 1960s expanded both civil liberties and the welfare state in ways that have paid enormous dividends over the last 60 years.

    However, the relative stagnation and economic uncertainty in the 1970s led to a new kind of liberalism – neoliberalism. Neoliberalism is akin to a modern reimagining of classical liberalism, promoting free trade, free markets, and a more limited governmental role in the economy. By many measures, it was successful – our GDP grew enormously, the world at large became a global marketplace, and the proliferation of consumer products and technologies that we all enjoy today were accelerated, if not enabled, by this turn. It also resulted in a decoupling of wages from productivity gains and severely exacerbated income inequality.

    Currently, we find ourselves in a situation that I believe, at least in part, is due to a failure of neoliberalism to effectively bring the entire populace along for the ride. I think Trump 2016 was an early mainstream indication that there was trouble brewing – people were dissatisfied with the status quo and they showed it at the ballot box. I still think most people are dissatisfied with the status quo, which might lead to a pendulum swinging back and forth electorally. I think that the Democratic Party has fallen into the trap of corporate-pleasing-GDP-maxxing-neoliberalism. I don’t necessarily begrudge the thought process behind this – for most of human history, if you made a society richer, you made a society greater. I think that trend has broken and needs to be cast aside as the main motivation in governing.

    What I want to see is a Democratic Party that is focused on making society greater. Focusing on human thriving, not GDP growth. I also want to see a Republican Party that is focused on making society greater. There might be very different ideas on how to get there, but I hope that this common goal, of a democratic, liberal society, is what follows this period of upheaval in American politics.

    On the eve of the fourth industrial revolution, this is uniquely and specifically important. I came across a really timely tweet today that eloquently summarizes why.

    Ethan Mollick (a professor focused on AI at the University of Pennsylvania) is correct that these previous periods of history have worked out pretty well in the end. There are lots of factors, though, that will make the AI industrial revolution even more challenging. The speed and breadth of diffusion of AI will be faster than the physical goods that needed to be shipped port to port in previous revolutions. The scope of the economy that the technology will impact will be broader than any technology we’ve seen previously (knowledge work first, then physical work). And the fact that AI is positioned to possibly be able to actually substitute for labor itself means it threatens one of the only two bargaining chips a polity really has to make an impact in society.

    If we lose both our ability to use our labor and our votes as bargaining chips, by slowly sliding into anti-democratic/authoritarian-curious governance, then We The People will be at the mercy of whomever is in power when that comes to pass.

    The Third New Deal

    Americans yearn for a political entity that will fight for them. Right now, the two major parties, at least where the concentration of actual political power lies, are both corporatist parties. Trump won on the promise that he would fight for the average American, then proceeded to plunge the country, and maybe the world, into a prolonged period marked by division, incompetence, and malice. Democrats largely, either by being true believers or being too afraid to upset the lucrative ecosystem of the affluent political class, have remained mostly aligned to a GDP-maxxing viewpoint.

    A new strategy is needed, one that will break the spell of both Trumpism and Corporatism. Nearly a century after its predecessors, a successful candidate in 2028 will propose a Third New Deal to the American people. One that is focused on maximizing the flourishing of the citizenry of this country. I’m not sure what the exact tenets of this would be. It’s going to be hard to balance appropriate state action that protects and improves the wellbeing of the people of the United States without hampering innovation in the age of AI. But meaningful pursuits are usually difficult. If we don’t try to address the societal frustration and malaise that’s manifested in recent memory, we will simply be kicking the can down the road. This might lead to further erosion of the social contract, further fracturing in society, and open the door for even more extreme politicians to take power in the future.

    When you pair this corporatist attitude that’s broadly applicable across the political spectrum today with the idea that we are basically betting the success of the global economy on the success of AI, you get a very volatile mix with a wide range of potential outcomes. I don’t want the future of the human species to depend on the benevolence of a yet-to-be-determined CEO of THE AGI COMPANY. You shouldn’t want that either.

    Worthy Opponents

    I’m not trying to hide the ball on where I stand politically. I’m not even trying to convince classical liberals to adopt my viewpoints. What I am trying to do with this post is to defend and preserve liberalism, to encourage everyone to weed out the roots of authoritarianism, even if they occur on “your side” of the aisle. There are few things worth fighting for more than for our ability to live in a liberal democracy. It’s not perfect, but as we approach our 250th birthday, we must remember that we’ve only got a republic if we can keep it.

    A note on AI use: Here at Clearly Intelligent I’ll be adopting a scale suggested by Seb Krier that explains how I use AI in generating my posts. I’ll file this under about a 2 on this scale. I used AI to do research on liberalism and create the chart within the post. The opinions herein are mine, and mine alone.

  • In 2026, AI will undoubtedly continue to be a huge part of our social, economic, technological, and political progression as a species. Here are a few things I think we’ll see in 2026 in the world of AI and elsewhere.

    Autonomous Driving Accelerates and Runs Into the Real World

    Waymo is planning to aggressively expand its areas of service throughout 2026 and beyond. Buoyed by impressive safety data and growing (albeit slowly) acceptance of autonomous driving, they will continue to increase their YoY miles logged and cover more geographic areas than they previously have. I’m very bullish on autonomous driving and think ideally designed and deployed autonomous driving systems will be widely available to riders in the coming decades and will dramatically reduce (>95% reduction) road fatalities once we hit the point at which a human is no longer necessary to drive the vast majority of miles logged in this country.

    However, my prediction for 2026 is that Waymo will see its first rider fatality. My hunch is that it will actually be the fault of the human driver in the other vehicle in the accident (i.e. a blind merge/lane change on a freeway), but it will happen nonetheless. At some point, the law of averages dictates that it must. I hope that I’m wrong, but we know that Waymo has been preparing for that moment whenever that first fatality does happen.

    What will the public think when this accident happens? Judging by other cases where fatalities (both human and feline) have involved autonomous technology, it will receive outsized media attention compared to a run-of-the-mill road fatality. Safety stats make for a boring story – a first of its flavor tragedy does not. Autonomous driving, like many other new technologies, will have skeptics that point at edge cases encountered in the real world, like when Waymos were more or less rendered inert during a PG&E outage in San Francisco in late 2025. These considerations are important – the world is messy! It’s full of edge cases. But they should be weighed appropriately against the preponderance of evidence of current road experience (excessive speeding, impaired driving, texting and driving, sexual assault by Uber drivers) to adequately assess the pros and cons of bringing a new technology into the real world.

    Chatbot Wars Intensify

    OpenAI’s ChatGPT is still by far the dominant market leader in consumer AI applications. Due to many factors, including explosive early adoption, rapid early iteration, and the very public personas of OpenAI’s leadership, ChatGPT is the “Kleenex” of AI chatbot applications today. Gemini will continue to increase its marketshare, as Google’s renewed and intense focus on consumer facing products and AI technology was an important 2025 trend. Expect lots of Super Bowl ads for AI again, really leaning in on the chatbot experience and form factor. It is still relatively early in the chatbot game, and I think that other than for superusers, the switching costs are still low. If Google can show that Gemini’s capabilities are on par with ChatGPT’s and thread the needle of more seamlessly integrating into G-suite and Android phones (something I’m admittedly a complete novice regarding considering my pseudo-religious devotion to the Apple ecosystem) I think they have a real shot at taking a bite out of ChatGPT’s marketshare.

    Apple Kills Siri – But “Phoenix” Rises from the Ashes

    Apple has been one of the weirdest players in the AI space. Over the past several years they have overpromised on features that never shipped or made AI a footnote in keynote addresses. I don’t necessarily blame them for it – they have continued to perform well both in the consumer marketplace and in the stock market and probably actually have made a wise decision to let the other companies duke it out in the capabilities war. Apple as a company has long had a corporate philosophy of not necessarily doing it first, but doing it best. As someone who thinks a lot about the creative process of product design, the thing that is most important to me when creating something new is to truly understand the problems your product is going to solve. Right now I think it’s the lack of well-defined and well-specified problems that is keeping AI adoption relatively low when you consider the actual capabilities of these tools. My bet here is that Apple retires Siri, sending it respectfully afloat on a barge, down the digital river to join other storied relics of the past. They use their fall keynote to launch Phoenix, Apple’s Ambient Intelligence.

    Ambient Intelligence is a Key Theme in Consumer AI

    An ambient intelligence, one that lives with you, hears what you hear, sees what you see, and knows what you experience in the world, would be capable of solving the problems that you encounter on a daily basis. Imagine a personal Clippy that rides around on your shoulder, adding to your to-do lists, ordering Ubers when you have a reservation, surfacing that ticket in your email as you arrive at a concert. This kind of ambient intelligence will make your life more frictionless. This is my bet for the consumer product category in 2026. Whether it be pens, pendants, pucks, or phones – ambient intelligence is coming. A truly personal assistant that takes the annoying things off of your plate and positively adds to your life.

    This vision is probably going to have a hard time being accepted outside of techno-optimist circles. There’s going to be a very thorny set of social norms to navigate in a world where omnipresent recording devices are now not just within phones, but in peripheral accessories as well. What does obtaining consent for recording look like? How is that data stored? What about separating personal and work data, much of which comes with far stricter rules and regulations? I think there’s going to be significant cultural and social pushback on further fortifying the distributed panopticon that our modern world already represents. Add to that the general public’s distaste for AI and it’s a recipe for backlash and ridicule.

    Utility over Capability – Benchmarks Fall Out of Favor

    LLMs are going to saturate most benchmarks by the end of 2026 and I think they will no longer be a great measure of an LLM-based AI tool. LLMs have made remarkable progress on human created benchmarks to date and I think that progress will continue. We will see intelligence per dollar across these benchmarks continue to rise, as smaller models improve and better computing is available, but the world starts to shift in favor of evaluating “utility” over “capability”. This means a lot of things, and at some point will probably be its own blog post, but mark it down here – increased focus on “what have you done for me lately”, not “what could you do in a theoretically perfect scenario.”

    White Collar Job Growth Will Be Non-Existent

    Slightly related to the capability vs. utility distinction in the preceding prediction, I think we see a continued holding pattern in the white collar world. C-suites all over are ecstatic about the promise of AI tools, wooed by AI industry players showcasing capability and promising utility. Huge, multinational organizations are going to be very wary of bringing in new workers to onboard and train if there’s a potential to automate their tasks in the next few years. While many companies may be using AI as cover for layoffs that occur for other, more traditional business reasons, I think it’s likely that organizations generally will take the stance of trying to keep teams relatively lean as they wait for the AI aha moment in their specific industry/use case.

    AI Will Be a Major Issue in the 2026 US Midterms

    At a time when a historically unpopular president already appears to be in his lame duck era, and has gone all in to appease the AI industry, it’s very likely that Democrats take a decisively anti-AI stance. It’s going to be the politically popular thing to do. I want Democrats to win back power and position themselves well for 2028, so I’m not going to necessarily begrudge them for harnessing popular sentiment and pointing it at the current punching bag.

    Good politics doesn’t always mean good policy. What I would like to see is this issue framed around building technology that works for us and increases our ability as humans to thrive, not technology that simply increases the valuation of a handful of trillion dollar companies. That’s a paradigm shift, though, that’s complicated and requires a lot of focus and effort – not something we are going to want to tackle in the 11 short months before these midterm elections.

    There will be lots of angles from which AI enters the political debate this year – nonconsensual AI image generation, data center construction, electricity prices, and job displacement are just a few. I don’t think the population is in a receptive mood to the idea that “if we just go all-in on AI, the future will be better.” Technology has done an incredible amount of good for human civilization, but we find ourselves in a precarious world currently, where we are chipping away at the last few percentage points of tangible improvements to our lives that entities are financially incentivized to go after, while major problems like hunger, disease, violence, and inequality remain.

    I want us to choose to use AI to help us maximize human thriving. We should be skeptical that a technology that has already provided outsized benefits to such a small percentage of people and corporations will magically enable us to solve all the issues we face on this planet. The upcoming midterms will be a preview of the 2028 election, where I believe AI will be the single biggest issue.

    AI Achieves a Scientific Breakthrough

    And for now, my most positive prediction: 2026 will be the year where AI for science goes mainstream within the scientific community and aids in/discovers a scientific breakthrough.

    We are around the time when the capability level of the tools, combined with greater acceptance within the scientific community of integrating them into their work, may produce something genuinely novel. Perhaps it’s a novel drug discovery, a math conjecture, or a materials science problem – all areas of knowledge with huge search spaces that are better approached with the intellectual horsepower of AI systems.

    I don’t think that the Nobel Prize will be awarded (at least in part) to an AI system quite yet (I think that will happen around 2028-2029), but I think scientists are really starting to understand the parts of the scientific process that are amenable to AI tool inclusion, and are going to be better able to deploy this familiarity with the tools in an experimental setting.

    Predicting, Updating, Iterating

    I am really excited to put out my first formal predictions post. I want to hear what you think about these predictions, and some of your own predictions. Putting a marker down at a specific point in time feels a bit treacherous, but I’ll be happily updating and iterating on these predictions over the years. I’ll also revisit this in December 2026 for my first prediction self-assessment. Subscribe if you want to make sure you track these through to the end.

    Updated January 10th, 2026 – Added in an article that posits that AI is being used as cover for layoffs rather than AI actually replacing human labor. Kudos to my good friend Dr. Paul Hook for sending along this piece.

  • AI Attitudes – What do people think about AI?

    It’s been a while (~2 months) since my last post and a lot has happened in the world of AI. The White House unveiled America’s AI Action Plan, which has generally been received warmly by parties across the political spectrum. Waymo and Tesla have expanded their self-driving offerings and promised additional cities later this year and into 2026. The long anticipated GPT-5 was rolled out – and it was not the smashing success that OpenAI had promised and hoped for.

    Within the nonstop barrage of technical reports, hype-baiting, and doomerism that you can find on X and other places that aggregate AI news, there was an absolutely fascinating report published by Seismic Foundation entitled “On The Razor’s Edge. Seismic Report 2025. AI vs. Everything we Care About”. This report, more than anything else that’s come out since my last installment, has taken up most of my mindshare. This post will attempt to highlight what I think are some of the most interesting, surprising, and important findings that this report uncovered, and what I think it all means for the future of AI and its impact on society.

    AI vs. Everything We Care About

    I’ll start by noting that the framing of “AI vs. Everything We Care About” is a bit confrontational, but I don’t think unwarranted by what was found during the course of the study. I’ll be pulling charts from the full PDF of the study, and encourage you to download it and read through it.

    Let’s start with the key findings.

    Key findings from Seismic Foundation 2025 Report. (Page 5 of full report)

    01 – Less than 1 in 3 see AI as a hopeful development for humanity.

    This finding, especially for a skeptical but optimistic AI acolyte like myself, surprised me. When considering the rate of progress in the AI space, the promise it has for accelerating drug discovery and streamlining healthcare, and my individual use of AI tools, I see lots of ways for AI to enhance and contribute to humanity. This finding makes much more sense when paired with the chart below, which plots the salience/importance of certain issues (how big of a problem people think these issues are) and if they think AI will make a positive or negative contribution in these areas.

    Issue salience vs. AI’s ability to help – AKA how big of a problem is this, and will AI make it better or worse? (Page 9 of full report)

    When this chart accompanies the viewpoint that AI is not a hopeful development for humanity, it makes the key finding much more coherent and grounded. Out of the 18 issues specified to the survey respondents, only three were seen as areas where the use of AI would make things better – Healthcare, Climate Change, and Biosecurity and Pandemic Prevention. The other 15 issues, which carry substantially more weight than the positive side of the ledger, were identified as areas where AI will exacerbate problems rather than positively contribute to solving them. That less than a third of people are “AI hopeful” makes a lot of sense when you see the considerations of the respondents quantified in this manner.

    02 – 1 in 2 see AI as a growing problem

    This finding was not very surprising to me. In fact – I think it should probably be higher! I think we will see this number continue to tick up in subsequent surveys around attitudes toward AI. There are a few interesting nuggets in the report that shed light on the challenges of truly understanding and summarizing opinions when using broad and unspecific language such as “Do you think AI is a problem?”

    The following two charts illustrate exactly why framing attitudes towards AI is so challenging.

    Understanding how big of a problem the use of AI is today across different countries. (Page 11 of full report)

    A majority of respondents across each country surveyed think that the use of AI is a big problem, either moderately so, or very much so. This might indicate that there is a lot of collective mindshare being dedicated to thinking about AI. But when you compare AI to a list of other problems, that doesn’t seem to be the case.

    “Big Problem” is quite relative. (Page 8 of full report)

    As you can see by this graphic, even though more than half of respondents identified the use of AI as a big problem, it ranks lowest on this list of other problems. I expect over time that similar surveys will show the use of AI growing as an area of concern, and making its way up the list. It’s also important to note that the pervasiveness of AI will likely grow in the future and intertwine with all of these other issues. Time will tell if it will have a positive or negative contribution to the remainder of the issues on the list, but the chart previously covered shows where the public’s attitude is regarding this.

    03 – 3 out of 5 people are worried about AI replacing human relationships

    I’m very unsurprised with this finding. I think we’ve got some really interesting evidence that this is already starting to take root. The rollout of GPT-5 earlier this month was less than stellar for many reasons, but one of the most surprising reasons had to do with the removal of the model picker. Basically, until GPT-5, a user was able to select a model from a dropdown menu of models to conduct a chat conversation with. GPT-5 introduced automatic model routing wherein the application itself selected the model best fit for the task (and, some have speculated, to cut down on inference cost). This change created a maelstrom of backlash – users begged for the return of the 4o and 4.5 models and OpenAI complied swiftly. Why would users want to use a “less capable” model? In many instances, it’s because the users developed a rapport or relationship with the model, and losing that model felt like losing a friend.

    I’m not making an ethical or moral judgment on the users that felt this way. I think it’s important to understand that people are already developing relationships with AIs. Mark Zuckerberg has explicitly said that he believes there is a market for artificial companions because the average person desires more friends than they actually have. The speed with which people are developing these relationships, and the fact that they are developing them with primarily text-based interfaces, is pretty surprising to me. I think this issue gets a whole lot thornier and more entrenched when there are avatars or photorealistic characters that attach themselves to the users. You can count me firmly in this 3 out of 5.

    04 – 7 out of 10 of the public agree that AI should never make decisions without human oversight and that humans could keep control

    This one is a bit trickier for me to nail down my perspective on. I’m probably a “no” on the first part of the finding and a “yes” for the second part. I’m reading this finding very literally and I don’t think there are many scenarios where never is the appropriate descriptor. Take for example AI systems’ use in healthcare. A February 2025 op-ed in the New York Times by Drs. Eric J. Topol and Pranav Rajpurkar discusses a research review that found in some instances AI tools outperformed not only doctors, but also doctors using AI. This was a surprising and somewhat counterintuitive finding. As AI capabilities continue to improve and broaden their domain expertise, I would assume this gap will widen. But just because AI tools might perform better does not mean that patients will want the human removed from the loop. Attitudes and opinions about AI (and everything else for that matter) are about much more than factual information and often rely heavily on emotions and past personal experience. This might seem obvious to many people, but I don’t think it’s as obvious to builders within the AI community as it is to the general population, which drives a lot of the divide.

    05 – More than 1 in 2 of the public are deeply worried about AI risks across all markets

    “AI risks” is another one of those terms that I think is so overly broad that it’s difficult to nail down what people actually think using that language specifically. You could easily make the case that the chart shown previously, detailing how the public thinks AI will likely make most issues worse, represents a set of “AI risks.” The survey also dug a little bit into more standard “AI risk” territory when it asked respondents to describe how worried they were about specific uses of AI.

    Attitudes towards AI risks. (Page 22 of full report)

    These areas, which include bioweapon development and AI developing agency and goals that conflict with human values, are more in line with the traditional modes of thinking about AI risk. I think it’s encouraging that people are worried about these risks, and it’s nice to see them concretized further than a general “AI Risk” bucket.

    In September, Eliezer Yudkowsky and Nate Soares are publishing a book entitled If Anyone Builds It, Everyone Dies – a tome dedicated to the existential risk posed by artificial superintelligence (ASI). I’m not sure what their media and PR strategy is going to be – I wouldn’t be surprised to see them on the Today Show or CNN trying to convince people to read the book and take their viewpoint seriously. I think the reception of this book, and the presentation of its authors in the broader, non-AI focused media, will be a good indicator for how ready the public is to actually consider the idea of AI existential risk, which I equate to the most extreme scenarios which today only 36%-40% of people believe are even in the realm of possibility. Much more to come on existential risk in the future, so I won’t belabor the point now.

    06 – 2.2x more pessimism about the impacts of AI among women compared to male respondents.

    A gender divide on AI attitudes (Page 14 of full report)

    This finding was really interesting and unexpected to me. The report mentions that this may be because women have an innate appreciation that “systemic issues already in place could be exacerbated by AI.” When considered with this in mind, and combined with the chart below showing how attitudes differ by income, the finding makes much more sense.

    A similar divide surfaces when stratified by income. (Page 15 of full report)

    After seeing these charts, I think it’s difficult to come to any other conclusion than the general attitude is that people are worried that AI will exacerbate societal issues and divides, rather than solve them. I think that’s my general viewpoint as well – that the default, inactive path of AI progress will not bring about a world of material abundance, peace, and prosperity for all. Surely that is a potential outcome, but it won’t be the one we end up at naturally or by accident. It will take willful effort on behalf of citizenry, governments, and companies to arrive at this future.

    07 – 1 in 2 students feel daunted by what the future of work looks like to them

    Students, especially those entering college now, or just graduating and entering into the workforce, face an uncertain future – one where the “need” for them isn’t particularly obvious. Take for example a student who just graduated in June of 2025 – they entered high school in Fall 2021, a full year before ChatGPT was even released. Looking at the insane rate of progress through their high school years, it doesn’t take much to imagine the feeling of uneasiness they may have about the rate of AI progress over the next four years, and where that may leave them when looking for employment after they graduate.

    A very mixed bag of emotions and attitudes. (Page 28 of full report)

    This graphic is really striking to me – it alternates between negative and positive impacts that students have felt and foresee about AI. I love the use of the word “daunted” in describing the attitudes of the student cohort. Feeling uneasy about AI and the future really makes sense for people who fall into this category, and I think it’s a very good summation of how the promise and perils of the technology are weighing on people’s minds. There’s also somewhat of a contradiction inherent in this line of questioning – AI may help me in the workplace, but I’m not confident there will be more jobs for me when I graduate. I don’t envy students’ and entry level workers’ positions right now, and am glad for the decade and a half between now and when my son will be considering his place in the workforce for the kinks to be worked out.

    08 – 1 in 2 believe AI development is moving too fast to evolve safely

    AI development is moving extremely fast. And based on levels of capital expenditure and the reliance of global financial markets on a handful of companies that are betting on the promise of the technology, I don’t see any indicators that this progress will slow down. So what do people think should be done about it to ensure it happens safely?

    What do people want to do to ensure AI development happens safely? (Page 36 of full report)

    There are lots of ideas that gained traction in this part of the survey. Interestingly, an initial glance at these proposals indicates to me that the lowest-desirability regulations are likely the most politically, economically, or technically implementable. For example – the US already has chip export controls to China, but only 26% of respondents favor that. Experts in AI policy would likely say that this is an extremely important step because it allows the US to maintain its lead in AI development. Conversely, the top suggestion of requiring companies to have a “kill switch” to turn off AI models in an emergency is practically impossible when you consider open source AI models. This chart is pretty indicative of the gulf that exists between experts and the general public, a problem that I think leads to lots of challenges in communication, policy, and technical understanding.

    The AI Publics

    In addition to the key findings of the report, another deliverable that’s worth discussing is Seismic Foundation’s segmentation of the general public into 5 groups.

    Respondents were segmented into 5 groups representing collective attitudes towards AI. (Page 40 of full report)

    Tech-Positive Urbanites

    Page 42 of full report

    The first group, of which I count myself a proud member, is the Tech-Positive Urbanites. There is an inherent contradiction within this group – they are much more likely than the rest of the respondents to outsource aspects of their life to AI, but they are also much more likely than the rest of the sample to be worried about AI replacing their job, and perceive AI to already have created lasting harm to society. How can you hold those things in your head at once? Well, I think a lot of it has to do with the idea that things have generally “worked out” for this cohort of society. As technology has proliferated and extended its influence on our daily lives, this cohort has likely gotten richer, more comfortable, and more powerful. I think there is definitely a status quo bias at work here – things have gone well for me in the past, and they will probably continue to do so in the future. I’m not sure betting on the continuation of the past is a good strategy with a technology like AI on the table, but time will tell if that’s correct.

    Globalist Guardians

    Page 46 of full report

    The Globalist Guardians are very worried about the future and are overall resistant to using AI in their day to day lives. They believe in strong, multilateral regulations that emphasize cooperation, information sharing, and safety to avoid risks (both specific and existential) from AI. They are concerned about the current state of the world and AI and how further development might increase risks and challenges. I empathize with their viewpoints, especially in the current state of AI where the “benefits” are not immediately recognizable or evenly distributed.

    Anxious Alarmists

    Page 50 of full report

    The entirety of the Anxious Alarmist cohort believes the next generation will have a harder, worse life than ours. They are not only resistant to using AI, but believe it’s nearly guaranteed to make all facets of life worse. I can’t dismiss this viewpoint completely – in fact, depending on your personal lot in life and your media consumption diet, it’s very easy for me to see how people can slot themselves into this category. When stories abound about people being duped into meeting an AI chat bot in person, the emissions that AI produces, and the potential for job replacement, it’s hard not to fall into a pessimistic mindset that you feel is simply the realistic mindset.

    Diverse Dreamers

    Page 54 of full report

    The Diverse Dreamers are a complicated cohort – they are worried about the risks of AI in society, but seem to be more malleable in exactly what should be done about them, and also leave the door open to positive uses of AI in their daily lives. They also have a bit of a contradictory viewpoint in that they strongly agree AI labs act in the best interest of society (21%) at a rate of nearly double the full sample (11%), but they still are pessimistic about the future and worried for their children and future generations.

    Stressed Strivers

    Page 58 of full report

    Stressed Strivers are the most neutral of the groups and are likely the most easily influenced. They have a much higher rate of “don’t know how worried to feel about X use of AI” than the general respondent pool, and they are much more open to AI use in their daily lives than the general respondent pool as well. The perception that AI could automate their job away is much higher in this group. They probably are the most representative of an unstable equilibrium – they don’t have strongly held opinions about AI, but one positive or negative experience could easily sway them in one direction or the other.

    Attitudes, Assessments, and Attention

    So – what do we do with all of this information? The first thing to do is to read it, think about it, and maybe try to do a self-assessment. Are you a Tech-Positive Urbanite like myself? Do you think AI will bring about the downfall of society like the Anxious Alarmists? Have you not put that much thought into it and consider yourself a Stressed Striver? I think some self-reflection is a good start after digesting this post.

    Then, what I would do, is try to seek out viewpoints from people, publications, or groups that do not fit into the same category as you. I think more Tech-Positive Urbanites should listen to what the Globalist Guardians and Anxious Alarmists have to say. That’s not something that comes naturally in our world of hyper-optimized media consumption – you have to seek it out. It’s more natural, and more comfortable, to align yourself with a viewpoint and stick to it, using your own media diet as a reinforcing mechanism for your belief system. I think this is a bad idea, as it really limits the scope of your experience and understanding of the world, and you can very easily come to confident opinions and viewpoints that have been reinforced by curated echo chambers you’ve built and/or have been algorithmically fed.

    So that’s what I think would be helpful for people to do on a personal level. Figure out where you stand and why you think what you do, but don’t be averse to consuming information that may change your opinions.

    Now what do I think AI companies can do about this? The most important thing to do is to understand how these attitudes stack up today. Below is a chart showing the estimated population distribution across the five groups in the countries surveyed.

    It’s very clear from this report that there are more people today who are worried about AI than who are excited about it. The strategy of continuing to integrate and infuse AI into every facet of life is going to be met with resistance if the communication strategy is “It’s going to be great – just trust us!”

    If AI companies want people to believe in the positive potential of AI, then they really need to focus on maximizing the positive and minimizing the negative. This seems obvious, but isn’t straightforward, especially when the areas in which the public thinks AI will be beneficial like healthcare take much longer to materialize than areas in which there is a negative perception, like in the erosion of human relationships.

    I think AI and technology companies would do well to not just focus on what technology can do for us, but what it can do to us. The smartphone is a lesson in getting this balance wrong, and AI has the potential to tip even further into a negative direction. I’m as techno-optimistic as they come, but have undergone an evolution wherein I no longer believe it’s as simple as “MORE TECHNOLOGY = BETTER WORLD.” It probably seems obvious to people that that was a naive belief to begin with, but I think there are probably a lot of people in tech who still do believe this, and build accordingly.

    To paraphrase Casablanca – AI is just like any other technology, but much more so. Getting it right will take a good understanding of the world and society, not just evaluating AI in a vacuum. It will also take lots of empathy and communication for and with people who don’t agree with you, and don’t believe the same things you do. We should try and encourage a world where the appropriate amount of attention is paid to the development of AI and its integration into society such that average people have a say in the future and are not passive participants in building it. It’s not going to be easy, but it’ll be worth it to try and get us to the best outcome possible.

  • Self Driving Salvation – A Worthy, Thorny Pursuit

    Every day in the United States, commuters, parents, children, and workers enter their vehicles and travel an astonishing 8,750,000,000 miles per day (for those counting, that’s 8.75 billion miles per day). Most of those trips are routine — going to the grocery store, a doctor’s appointment, or to the office. But for roughly 40,000 people per year, that will be the last trip they ever take.

    Road fatalities in the US reversed their gradual decade-over-decade decline starting in the early 2010s (texting and driving anyone?) and have settled around that 40,000 number for the past several years. That’s about 110 people every day who lose their lives on the roads. I believe that in the 21st century, we can make road fatalities as rare as getting struck by lightning (300 people per year), but doing so will require a massive amount of coordination, safety testing, and societal adjustment. The dream of autonomous, perfectly safe vehicles that do not crash is attainable, and a worthy goal we should strive for.
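    For the arithmetically inclined, here is a quick back-of-the-envelope sketch of where those numbers come from. The inputs are the rounded figures quoted above, not official statistics, so treat the outputs as approximations.

    ```python
    # Rough arithmetic using the figures cited in this post (approximate, rounded).
    miles_per_day = 8.75e9             # vehicle miles travelled per day in the US
    deaths_per_year = 40_000           # approximate annual road fatalities
    lightning_strikes_per_year = 300   # rough "struck by lightning" comparison used above

    miles_per_year = miles_per_day * 365
    deaths_per_100m_miles = deaths_per_year / miles_per_year * 100e6
    deaths_per_day = deaths_per_year / 365
    reduction_to_match_lightning = 1 - lightning_strikes_per_year / deaths_per_year

    print(f"Vehicle miles per year: {miles_per_year:.2e}")                    # ~3.2e12
    print(f"Fatalities per 100M vehicle miles: {deaths_per_100m_miles:.2f}")  # ~1.25
    print(f"Fatalities per day: {deaths_per_day:.0f}")                        # ~110
    print(f"Reduction needed to be as rare as lightning: {reduction_to_match_lightning:.1%}")  # ~99.3%
    ```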

    A Brief History of Autonomous Driving

    The first inklings of desire for a driverless future arose in 1925. An electrical engineer named Francis P. Houdina rigged a vehicle with motors and a radio antenna that allowed him to control the speed and direction of the car remotely. General Motors developed Futurama for the 1939 World’s Fair. This exhibit and ride correctly predicted a vast, interconnected highway system that came to fruition through Eisenhower’s commitment to federal highway construction. Unfortunately, radio-controlled automatic highways did not. A couple of decades later, George Jetson hopped in his flying car, punched in his destination, and was autonomously whisked away to work.

    If GM had access to AI image generation tools, maybe this is what “Futurama” would have looked like

    It wasn’t until about the 1980s that the dream of a self-driving car inched its way forward on the spectrum of possibility. Teams from Carnegie Mellon and Mercedes Benz created vehicles that could self-drive under certain conditions. In 1995, another car built at Carnegie Mellon completed 98% of a cross-country road trip without human intervention. In 2004, DARPA created a Grand Challenge competition that invited participants to build autonomous vehicles to navigate a 150-mile course. The best performing vehicle completed 7.32 miles in the inaugural edition of this race. The next year, a team from Stanford University unleashed Stanley (pictured below) on the course, and claimed victory, finishing in 6 hours and 54 minutes.

    Stanley was a diesel Volkswagen Touareg equipped with rooftop LIDAR units, an electric motor to control the steering wheel, and a hydraulic piston to shift gears.

    Excited by the promise of self-driving, and inspired by the successes in the DARPA grand challenges, Google launched their own self-driving car project in 2009. Tesla introduced “Autopilot” in 2014, which enabled lane-centering and speed control without driver intervention. As competition in the sector ramped up, the first pedestrian fatality involving a self-driving car occurred. In 2018, a pedestrian named Elaine Herzberg was struck and killed when a self-driving Uber failed to detect her walking a bicycle across a highway. A five year legal battle ensued, with Uber ultimately being cleared of criminal wrongdoing, and the supervising human driver pleading guilty to endangerment. The case made national headlines due to the uniqueness of the event and the ethical concerns regarding it. More on that later.

    Fast forward to today, and the state of autonomous driving has continued to advance. Driverless Waymos inhabit the streets of Los Angeles, Phoenix, San Francisco, and Austin. Just last week, Tesla rolled out its long-hyped (in 2019, Elon Musk predicted a million robotaxis on the road by 2020) robotaxi service in Austin as well. The autonomous driving future isn’t here, but the seeds are planted, and cultivation is ongoing.

    How an autonomous future comes to pass

    Now that we’ve got a brief history out of the way, it’s important to explore how the technology is categorized and how it works today, what the differing approaches amongst competitors are, and how these approaches and strategies might evolve in the future to deliver on the lofty goal of fully autonomous driving.

    In 2014, SAE International, an automotive standardization body, published an initial classification system that aimed to codify a spectrum of autonomous vehicle capability. The latest version, updated in 2021, is pictured below.

    SAE J3016 Levels of Driving Automation

    It’s useful to have this reference available when thinking about the progress made on self-driving so far, and where it is headed in the future. Many new cars today come with features that would classify as SAE Level 2, such as lane assist, adaptive cruise control, and brake assist. So you might even have experience with autonomous driving today – you just didn’t know it was classified as that. Currently, companies like Waymo and Tesla are focused on developing Level 4 autonomous driving. Some characteristics of these self-driving vehicles are operation within a specific, pre-defined geofenced area and the lack of a human driver behind the wheel.
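    As a rough reference, here is that ladder in code form – a paraphrased summary of the J3016 levels as a small Python enum. The level names are paraphrased from the standard, and the example placements reflect my own reading of where today’s systems sit, not any official classification.

    ```python
    from enum import IntEnum

    class SAELevel(IntEnum):
        """Paraphrased summary of the SAE J3016 levels of driving automation."""
        NO_AUTOMATION = 0           # human does all the driving
        DRIVER_ASSISTANCE = 1       # steering OR speed support (e.g., adaptive cruise control)
        PARTIAL_AUTOMATION = 2      # steering AND speed support, driver must constantly supervise
        CONDITIONAL_AUTOMATION = 3  # system drives in limited conditions, driver must take over on request
        HIGH_AUTOMATION = 4         # no human driver needed within a defined operating domain (e.g., a geofenced city)
        FULL_AUTOMATION = 5         # no human driver needed anywhere, under all conditions

    # My own rough placement of the systems discussed in this post (illustrative, not official)
    examples = {
        "Lane assist / adaptive cruise in many new cars": SAELevel.PARTIAL_AUTOMATION,
        "Waymo robotaxi in a geofenced city": SAELevel.HIGH_AUTOMATION,
        "Tesla Robotaxi pilot in Austin": SAELevel.HIGH_AUTOMATION,
    }

    for system, level in examples.items():
        print(f"{system}: Level {int(level)} ({level.name})")
    ```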

    When it comes to the technology stack that companies are using to pursue autonomous driving, there are two basic approaches – Tesla (Camera Only + AI) vs. Waymo (3D mapping + Camera + Lidar + Radar + AI), outlined in the graphic below from Bloomberg.

    As you can see, and probably surmise, Tesla’s approach is far more scalable and cost-effective. The sticker price for a Waymo vehicle is around $180,000, the high cost of Lidar and Radar units contributing significantly to that amount. Additionally, Waymo relies on highly detailed 3D mapping of the geofenced area in which it operates. So Tesla has an advantage when it comes to cost and scalability, but will losing the additional sensor information gained from Lidar and Radar, and operating without a 3D map for reference, reduce the overall safety of autonomous Tesla Robotaxis? I think it’s too soon to tell definitively, and to what degree, and I also think there’s more to the safety story than just statistics.

    Safety Statistics, Failure Modes, and Human Factors

    Beyond the enormous total addressable market for taking over the role of the human driver, and the astounding economic value that could potentially be captured, there is one outcome of a driverless future that is unassailably “good” — reducing road fatalities to zero. Autonomous driving, when rolled out in a responsible way and operating under conditions that are appropriately constrained to the technological ability of the system, is already safer than human driving. Waymo recently released a detailed report that provides a comprehensive overview of the safety benefits achieved by their automated fleets.

    Waymos are already safer than human drivers when comparing accident rates over mileage travelled

    These results are very promising. Who wouldn’t want to live in a world where we could reduce crashes by 90%? Impressive as they are, these statistics come from an extremely small “sample size” when compared to the total vehicle miles travelled each day, and rely on an expensive, gold standard technology stack that incorporates data from multimodal sensor arrays and detailed 3D mapping of the areas in which they operate. It will be very interesting to see the safety reports around Tesla’s Robotaxi offering, as that system relies solely on camera input and AI systems to pilot the cars.
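    To make the “small sample size” caveat concrete, here is a minimal sketch of how a per-mile crash-rate comparison works. The numbers are placeholders I made up for illustration, not figures from Waymo’s report; the point is how wide the uncertainty band stays while the autonomous mileage is still small.

    ```python
    import math

    # Placeholder inputs for illustration only (NOT from Waymo's actual safety report).
    human_crashes_per_million_miles = 4.0   # hypothetical human-driver baseline rate
    av_miles_millions = 50.0                # hypothetical autonomous miles driven (in millions)
    av_crashes_observed = 20                # hypothetical crashes observed over those miles

    av_rate = av_crashes_observed / av_miles_millions
    reduction = 1 - av_rate / human_crashes_per_million_miles

    # Crude 95% interval on the AV crash rate (Poisson approximation) – this is what
    # "small sample size" does to a headline percentage.
    se = math.sqrt(av_crashes_observed) / av_miles_millions
    low, high = av_rate - 1.96 * se, av_rate + 1.96 * se

    print(f"AV crash rate: {av_rate:.2f} per million miles "
          f"(~{reduction:.0%} lower than the human baseline)")
    print(f"Approximate 95% interval: {low:.2f} to {high:.2f} per million miles")
    ```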

    Statistics, however, are not the only piece of the puzzle when we think about how these autonomous vehicles are going to be able to be integrated into our lives. An interesting phenomenon that I’ve observed, one that’s going to have an outsized impact on the general public’s appetite for accepting self-driving cars, is the fact that the “failure modes” for these autonomous vehicles are sometimes nonsensical. Because these systems operate completely differently from a human driver, sometimes when they make a mistake, they make a mistake that a human absolutely would not make. I’ve collected a few examples below.

    A Waymo speeds through a flooded sinkhole, completely ignoring a public works crew that was attempting to block off the scene and redirect traffic

    A Tesla Robotaxi fails to stop when a UPS truck begins to back up, prompting the safety monitor to stop the car.

    Another Tesla Robotaxi slams on the brakes twice when it notices police cars on the side of the road

    These are three examples of behavior that depart completely from the way an attentive human driver would handle these situations. Even novice drivers would know to stop or adjust their course when confronted with a public works crew guarding a flooded sinkhole, apply the brakes when a vehicle begins slowing down and then backing up in front of them, and realize that stationary police cars on the side of the road, not impeding traffic, are not cause for slamming on the brakes.

    There is already a cottage industry popping up of collecting and sharing these autonomous driving fails. The Verge compiled a list of these events and even the relatively pro-autonomy Self Driving Cars subreddit is keeping track. In an effort to stay as neutral as possible, I won’t condemn this behavior — I actually think it’s really important to collect data on these failure modes and to spread awareness of them to prevent the technology from rolling out before it’s ready for primetime. Despite this, I also think it’s going to present a very difficult challenge to appropriately frame these failures against all of the safe, successful miles that these cars drive, as evidenced by Waymo’s safety report. In keeping with the old adage of “If it bleeds, it leads”, depictions of these failures are much more likely to be “newsworthy” than a boring summary of safety statistics. Add to this the fact that there is a lot of social media clout to be gathered by dunking on AI, and we’re far likelier to see and be moved by videos of these failures than overall safety metrics.

    A final point worth exploring, one slightly related to the proliferation of and appetite for these failure videos, is that the way humans perceive and process information is going to significantly shape the acceptance of self-driving. People are not naturally good at assessing risk dispassionately, separating their personal feelings and beliefs from the reality of the situation. A prime example is the fear of flying. You are far, far more likely to die in a car accident than in a plane crash, but have you ever heard of someone who is afraid of riding in a car? Probably not.

    There are a lot of reasons that the fear of flying exists. Every commercial aviation accident drives worldwide headlines, plane crashes are more likely to be fatal than car accidents, and when you’re flying commercially, you have no control over the situation. These facts drive emotions and perceptions about the safety of driving vs. flying, and no matter how many statistics you cite, like deaths per passenger-mile, people are still going to be afraid of flying. I don’t think self-driving advocates are going to convince people of the benefits of this technology by statistics-spamming.
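
    For what it’s worth, the per-mile arithmetic is easy to sketch. The figures below are rough, order-of-magnitude assumptions (on the order of 40,000 US road deaths across roughly 3 trillion vehicle miles in a typical year), not official statistics.

```python
# Rough, order-of-magnitude sketch of the per-mile fatality rate for driving.
# These are assumed ballpark figures for a typical year, not official data.

road_deaths_per_year = 40_000                # approximate US road fatalities
vehicle_miles_per_year = 3_000_000_000_000   # roughly 3 trillion vehicle miles

deaths_per_billion_miles = road_deaths_per_year / (vehicle_miles_per_year / 1_000_000_000)
print(f"Driving: ~{deaths_per_billion_miles:.0f} deaths per billion vehicle miles")

# US commercial aviation has had entire years with zero onboard passenger
# fatalities, so its per-mile rate is effectively a rounding error by comparison.
```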

    The Good Future and How to Get There

    I’ve spent much of this post discussing some of the pitfalls and challenges that face self-driving cars today. I think it’s really important to be intellectually honest, and handwaving away the current state of the technology, warts and all, would be intellectually dishonest. I want to conclude this post, though, by talking about how I think we get to the best version of the future and what that good future might mean.

    First, I think it’s absolutely imperative that the federal government embark on building a regulatory framework based on the SAE levels of autonomous driving. I don’t think it should preempt the local experiments that Waymo and Tesla are running in various cities around the country, but it’s going to be important to lay the groundwork for federal regulation of self-driving cars. If I could wave a magic wand, I’d want a huge portion of the Nevada desert to act as a self-driving proving ground, consistently incorporating new edge cases and lessons from self-driving fails to iteratively improve the technology.

    Additionally, still speaking from a magic-wand standpoint, I’d want to start collecting video data from the millions of cars on the road today. Billions of miles of driving data are produced every day that would be hugely beneficial for training vision models and for identifying edge cases and behavioral preferences in the vast problem space of driving. This would quite possibly be a privacy nightmare, and I don’t know how you could implement it effectively or ethically. Perhaps something like an insurance company offering reduced rates if the vehicle collects this data. Again, keep the magic wand, rather than the panopticon, in mind.

    Finally, I’d love it if everyone would just become a bit more neutral about this topic. That’s kind of the point of Clearly Intelligent, and I know it’s going to be a hard-fought battle. But if autonomous driving companies could spend less time talking about eliminating all human drivers in the next 10 years, and the public could break the status quo bias of accepting 40,000 road fatalities a year in the name of “keeping humans in charge” of driving, we might actually be able to chart a path forward.

    The good future I envision not only involves rare-as-lightning-strike road fatalities, but also redesigned cities, with more plentiful housing and denser, richer communities. A future where car ownership isn’t a necessity to simply exist in many parts of the country. A future where clean, safe, reliable transportation options change the way we move around.

    There’s much more to say on this, and it’s a topic I’ll be coming back to regularly. Advanced technology helps us reframe what’s possible and helps solve major problems that exist in the world. Autonomous driving is a perfect example of this, and even if the road we’ll travel is bumpy, there’s a good future we can achieve, as long as we are intentional and measured in pursuit of it.

  • AlphaFold, Isomorphic Labs, and the potential of AI for science

    Science is incredible. The practice of observing the world around us, stating hypotheses, designing experiments, collecting data, analyzing results, and sharing the work with peers has propelled our civilization forward in countless ways. Ancient Babylonian astronomers tracked the movement of celestial bodies and used this information to refine their calendar’s accuracy and generate the first planetary theory in human civilization. During the Islamic Golden Age, which spanned the 8th to the 13th centuries, scholars devised experiments to understand the characteristics of light and vision and fathered algebra. Newton, Galileo, Da Vinci, and countless other scientists of the Renaissance and early modern era, propelled by the deluge of shareable information made possible by the printing press and practicing within a newly formalized framework called the scientific method, discovered and created even more scientific advances. The Enlightenment, Industrial Revolution, and Information Age have all served as additional catalysts, amplifying the speed and magnitude of our collective scientific understanding. AI has the potential to equal or surpass these force multipliers of the past and vastly expand our ability to observe, understand, and engineer our world.

    Alphafold – Predicting the shape of life

    I was inspired to write this post after watching a great interview with Max Jaderberg and Rebecca Paul of Isomorphic Labs, a drug discovery company spun off from Google DeepMind and now part of Google’s parent company, Alphabet, Inc. In the interview, host Professor Hannah Fry and her guests discuss the potential for AI in drug discovery, the way humans and AI collaborate in the drug discovery process today, and what future AI capabilities might unlock for scientific understanding.

    A foundational technology to Isomorphic’s founding and approach to drug discovery is AlphaFold – an AI program built by Google DeepMind with the goal of predicting a protein’s 3D structure from its amino acid sequence. Proteins are fundamental biological molecules that are responsible for a vast amount of the activity that occurs in living beings, from transporting molecules to carrying out the chemical reactions that take place in cells. It is relatively straightforward to determine a protein’s amino acid sequence, but until AlphaFold, it was extremely difficult and labor intensive to determine the 3D structure of a protein. Determining a protein’s structure in a lab using techniques like X-ray crystallography, where scientists crystallize a protein, blast it with X-rays, then analyze the diffraction pattern of those X-rays, can take months or years and cost several hundred thousand dollars.

    AlphaFold enabled highly accurate prediction of a protein’s 3D structure, placing first in CASP, the competition designed to assess this exact capability. AlphaFold 2 scored even higher, and AlphaFold 3 extended the scope of the system to complexes that proteins form with DNA, RNA, ligands, and ions. In recognition of this incredible work, Sir Demis Hassabis and John Jumper of Google DeepMind shared half of the 2024 Nobel Prize in Chemistry for their work on protein structure prediction with AlphaFold.

    From structures to molecular candidates

    Why is determining a protein’s structure so crucial for drug discovery? Because drugs work by fitting into a protein’s 3D structure like a key fitting into a lock. As explained in the interview, this geometric fit has historically had to be worked out experimentally, with medicinal chemists creating candidate molecules and testing their ability to interact with the target protein in a way that mitigates a disease mechanism.

    Thanks to AlphaFold 3, this can now be done in-silico — on a computer.

    Screenshot of AlphaFold 3 platform showing a candidate molecule’s 3D structure interacting with the grayed out 3D protein structure (left) and the molecule’s 2D chemical structure (right)

    By previewing the molecule’s predicted interactions with a protein, and being able to make changes within the platform, scientists can understand the candidate molecule’s likelihood of success and tweak it in seconds, instantly viewing the new result.

    It’s vitally important to understand that these tools do not replace the need for experimentation in the real world – the “wet lab.” What they do, however, is allow for much more expansive and time-efficient experimentation virtually, making precious experimental effort in the real world more valuable and efficacious. If you’re going to spend time in a lab testing molecules, systems like AlphaFold 3 give you additional confidence that the molecule you’re working on has a higher probability of success than if you had skipped the virtual pre-validation step. Does that mean that specific molecule will surely work out, execute the exact mechanism needed to treat a disease, and go on to be successful in clinical trials? No – there’s no guarantee of success. But if scientists can use AI tools to make each “shot on goal” more likely to succeed, it follows that they could drastically shorten the time needed to arrive at a successful outcome.
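
    To see why raising the per-candidate odds matters so much, here is a minimal sketch using the geometric distribution: if each candidate succeeds with probability p, you expect to test roughly 1/p candidates before one works. The probabilities below are made-up illustrations, not figures from Isomorphic Labs or the interview.

```python
# Minimal sketch: how per-candidate success probability changes the expected
# number of wet-lab "shots on goal". Probabilities are illustrative assumptions.

def expected_candidates(p_success: float) -> float:
    """Expected number of trials until the first success (geometric distribution)."""
    return 1 / p_success

without_triage = expected_candidates(0.01)  # blind screening
with_triage = expected_candidates(0.05)     # candidates pre-validated in silico

print(f"Without virtual pre-validation: ~{without_triage:.0f} candidates on average")
print(f"With virtual pre-validation:    ~{with_triage:.0f} candidates on average")
```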

    AI-Human Collaboration

    In five years time, doing drug discovery without AI will be like doing any sort of science without math.

    Max Jaderberg, Chief AI Officer, Isomorphic Labs

    There are lots of exciting takeaways from this interview – who wouldn’t be excited about the hyperbolic prospect of curing all diseases in 10 years – but the most exciting part to me is thinking about using AI to accelerate the speed of scientific discovery and making intractable problems tractable.

    Chemical space, roughly the number of theoretically possible molecular structures, is frequently cited to be around 10^60 structures – that’s a 1 with 60 zeroes after it. To put that in perspective, if each of the 10^20 grains of sand on Earth were in fact its own Earth with 10^20 grains of sand on it, then each of those “grainchildren” would also have to be its own Earth, with its own 10^20 grains of sand, for us to equal the vast size of chemical space. When Isomorphic uses groundbreaking technology like AlphaFold 3 as a wayfinder in that vast space, they have the potential to massively speed up the process of bringing life-changing treatments to market.
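
    The grains-of-sand analogy is really just exponent arithmetic: three nested helpings of 10^20 multiply out to 10^60, as the quick check below confirms.

```python
# The analogy is exponent arithmetic: three nested "Earths' worth" of sand.
grains_per_earth = 10**20

total = grains_per_earth * grains_per_earth * grains_per_earth
assert total == 10**60  # matches the commonly cited size of chemical space
print(f"10^20 * 10^20 * 10^20 = 10^{len(str(total)) - 1}")
```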

    AI for Science = AI for Good

    Strong opinions about AI are forming rapidly as it works its way into the social, economic, and technological facets of our society. Pew Research recently surveyed AI experts and the general public about their views on AI. There are lots of interesting datapoints in the survey, and I plan on doing a full post on it soon. But I want to draw attention to a specific line of questioning related to AI having a positive or very positive impact in certain areas.

    There is one standout category from this line of questioning – Medical Care. A huge majority of AI experts believe that AI’s impact will be positive in the domain of medical care, and a whopping 44% of the general public does as well. In a sea of discontent, hype, doom, and fear, using AI to increase our ability to lead healthier, longer lives less affected by disease sounds like a pretty good future for us all to rally around. Companies like Isomorphic are leading the charge, and open source breakthroughs, like the recently announced Boltz-2 AI model that predicts drug-binding affinity (another key consideration for drug design) will help accelerate progress. I’m confident that this slice of the future is bright, but we’ll have to navigate choppy waters during the journey there.

    Timing is everything

    Accelerating the drug discovery process is a worthwhile endeavor, and I’ll be following and rooting for the companies looking to do so. But the results from these efforts will still take time – very likely in the 5 to 10 year time frame before compounds come to market from Isomorphic or other players in the space. In that time, I really worry about the negative impacts that AI could have, from job displacement, to personalized election misinformation, to enabling a further retreat into socially isolated lives powered by hyper-optimized generated content. These negative potential outcomes could further entrench views about AI, and even erode goodwill that has built up for more generally accepted altruistic applications of AI, like using it to supercharge scientific progress.

    In a world where negative and polarizing news drives the most engagement, and the networks we use to consume that news prioritize engagement above all else, it’s an uphill battle to get people to pay attention to the potential that AI has to accelerate our understanding of the world and our ability to engineer a better future. It’s also difficult to expect people to have a nuanced view of AI technologies when they are frequently lumped together as a monolith rather than viewed as separable efforts that have both obviously good and obviously bad use cases. That’s part of the mission here at Clearly Intelligent – to enable my audience to understand and form coherent and nuanced views on the promise and perils of AI.

    I’ll end this post with a great graphic I came across recently that charts the pace of scientific progress throughout human history. It’s truly incredible to look at how far we’ve come from our earliest days as a species, and how rapidly we have been able to advance in recent history. As we enter the age of abundant intelligence, we have the opportunity to point it at the most pressing problems we still face, and I hope we use that power as a force for net good.

  • People have been concerned with new technology’s impact on labor and work for as long as the concepts of labor and work have existed. Take, for example, the humble plow, a tool that shifted the measure of a person’s ability to extract value from the earth from the amount of work they could do with their bare hands to the amount of land they owned and harvested. Had newspapers existed in the early agricultural age, proto-reporters surely would have pulled quotes from concerned furrow-diggers about what the future held in store for them.

    Last week, there seemed to be a noticeable uptick in media coverage about AI and the impact that it will have on jobs in the future. Time Magazine, Axios, and The New York Times each had articles worth reading. Time laid out potential ways to address the upheaval many predict will occur while Axios spoke at length with Dario Amodei, CEO of Anthropic, about his predictions for a future full of broadly capable AGI systems and how there could be significant impact on jobs in just the next few years. The New York Times focused a bit more narrowly on the impact AI is already having on the job market for recent college grads.

    In addition to reading these articles, I saw the following 11-minute clip that stitched together lots of predictions.

    In sharing this compilation, I’m not endorsing the viewpoints or predictions therein. Nor am I compelled by the broad and sensational headline “The Great AI Job Displacement Is Closer Than You Think.” It’s important to me to showcase the viewpoints of individuals who run major AI labs, have vast experience in the AI space, and have paid close attention to recent progress.

    Here are a few of the most staggering quotes from the video:

    “I’m actually afraid of the world where 30% of human labor becomes fully automated by AI and the other 70%…that’s going to cause this incredible class war between the groups that have been and the groups that haven’t been” – Dario Amodei, CEO of Anthropic

    “That doesn’t mean the transition isn’t going to be messy, in fact, I expect it in some ways to be pretty painful” – Sam Altman, CEO of OpenAI

    “Psychosocially, it’s very disturbing that you can no longer tell people what kind of world they should prepare their kids or grandkids for.” – Tyler Cowen, Marginal Revolution

    Each of these utterances portends a future with serious challenges. Potential for class war, an economic transition of indeterminate length and unpredictable levels of strife, and most of all a range of potential outcomes so vast that it becomes impossible to predict, let alone prepare for. Imagine if the messages in that clip didn’t come from the stages of conferences or the interiors of well-equipped podcast studios, and instead, you encountered them walking down a city street.

    If I saw this man, I’d hastily move to the other side of the street, not letting his proclamations take up space in my mind. It’s easy to ignore sweeping statements about a vastly different and uncertain future, especially in a world that has so many ongoing, more tangible challenges. It’s also understandable to chalk these statements up as hype — chum for investors looking to secure a slice of the trillion dollar labor market these predicted drop-in remote workers of the future could dominate. Realistically, the likeliest outcome lies somewhere in between automating all white-collar work by 2030 and producing systems that are economically useless and therefore inconsequential.

    Impactful Assumptions

    This post isn’t meant to focus on my own personal predictions, or to dissect individual predictions made in the articles or video I’ve discussed so far. There are thousands of voices out there who have done that, and they’ll continue to make predictions as we integrate AI into our lives. Something I will do though is list out a few assumptions I think the people expressing the viewpoints covered in this entry are making that hugely impact both the timeline for and magnitude of AI’s impact on the job market.

    Assumption #1 – Jobs can be completely decomposed into a discrete and finite set of tasks that have clear indicators of task success and task failure.

    For a job to be completely replaced by AI, all of the tasks associated with that job must be identified and either completed by that AI, delegated to another job function, or eliminated. The process of compiling an exhaustive list of everything you’re responsible for doing as part of your job is probably going to be pretty difficult and time-consuming. Now multiply that effort across every organization, every job function, and every position. That’s a lot of work – not something done at the snap of a CEO’s finger.

    In addition to the herculean effort of enumerating all of these tasks, I think there’s probably a bit of cognitive bias at play when leaders in the AI field talk about the ease of automating work. An example often cited as evidence that we are clearly headed for an automated future is the position of Software Engineer – a computer programmer. Today’s most advanced models already perform extremely well in this domain, and empowering them with agentic abilities will increase their utility in commercially important ways. But there’s something particularly nifty about computer code that separates it from the work product of many other tasks – it’s easily verifiable. Either the program compiles, runs, and passes its tests, or it doesn’t. This sort of pass/fail task lends itself remarkably well to automation because much of the recent progress in these AI systems comes from reinforcement learning. Very simply, the system tries things, gets scored on the result, and learns to do more of what scores well and less of what scores poorly. Repeat this several thousand or million times and you get a system that’s performant.
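
    To make the “easily verifiable” point concrete, here is a toy sketch of what a pass/fail reward signal for a code-generation task might look like. The function and the setup it implies are hypothetical simplifications rather than how any lab actually trains its models, but the key property is the same: success is cheap and unambiguous to check.

```python
import subprocess
import sys
import tempfile

def code_reward(generated_code: str, test_code: str) -> float:
    """Hypothetical pass/fail reward: 1.0 if the generated code passes its
    tests, 0.0 otherwise. The point is that correctness is cheaply checkable."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(generated_code + "\n" + test_code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, timeout=10)
    return 1.0 if result.returncode == 0 else 0.0

# Example: the model's output either passes the assertion or it doesn't.
candidate = "def add(a, b):\n    return a + b"
tests = "assert add(2, 3) == 5"
print(code_reward(candidate, tests))  # 1.0
```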

    Where are there a lot of software engineers? AI companies. If you’re surrounded by people doing easily demarcated tasks with verifiably good or verifiably bad results, it might be natural to over-index on that first-person experience and believe that much of the work that goes on in the economy shares those characteristics. I don’t think that’s necessarily the case, and bumping up against this fact when broadening the scope of the problems these systems tackle may result in slower timelines than anticipated.

    Assumption #2 – User, customer, and market preferences are multifaceted, and one of those facets is acceptance of AI itself. Eventually, the benefits of including AI will outweigh the factors that make someone less willing to engage with an AI offering.

    If I were to pitch you a new software solution that promises to replace your entire accounting department with an army of AI agents at a tenth of the cost, that deal may sound pretty good. There’s a catch, though: it can’t present at your quarterly board meetings like your current head of accounting does. Oh, and it’s not very persuasive when chasing down past-due accounts. Those tradeoffs might be worth it, so you let your team go and install your agents. For a 90% discount, you’ll stomach some discomfort and shift those tasks elsewhere.

    These tradeoffs might not always be worth it, and sometimes they will downright backfire. In one recent example, Swedish fintech firm Klarna hired back some human employees after AI customer service bots frustrated customers and reduced service quality.

    Just because an AI replacement is capable of performing some economically valuable activity doesn’t mean that whoever is on the receiving end is going to want AI to do that work. Sometimes, they will swallow the tradeoff. Sometimes, they’ll vote with their feet and change their consumer behavior. And if a manager cannot reliably understand how a given conclusion was arrived at, or cannot wholly replace an entire job function, it may be easier to stick with the status quo of human labor.

    Assumption #3 – The problems that face current day AI systems, like hallucinations narrowly, and general inscrutability broadly, will be solved well enough to make their functional deployment tenable.

    Hallucinations – instances where a model fabricates information, answers, hyperlinks, or court cases – still plague large language model-based AI systems today. While hallucination rates have trended down over time, and savvy users of these tools know to look out for these errors, there’s not yet a silver bullet to address them. One mistake in one response to one prompt is bad – but what happens if that hallucination occurs in the initial step of a many-step workflow undertaken by an autonomous agent? That butterfly wing flap might substantially throw off the end work product, making the entire endeavor worthless. Add to this the fact that, despite advances in mechanistic interpretability research — the study of how AI systems “think” — we still don’t have much of an idea of how exactly these systems work, so you can’t reliably interrogate the process used to arrive at the final result. Some systems have started to show users the reasoning used to arrive at an output, but this feature currently lacks the depth and completeness necessary to provide the exact and thorough accounting a human employee could.
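
    The compounding problem is easy to see with a back-of-the-envelope calculation: if each step in an agent’s workflow is correct with probability p, the chance the whole n-step chain stays clean is roughly p^n (treating errors as independent, which is itself a simplification).

```python
# Back-of-the-envelope sketch of error compounding in a multi-step agent
# workflow, assuming (simplistically) that each step fails independently.

def chain_success_probability(per_step_accuracy: float, steps: int) -> float:
    return per_step_accuracy ** steps

for steps in (1, 5, 20, 50):
    p = chain_success_probability(0.95, steps)
    print(f"{steps:>2} steps at 95% per-step accuracy -> {p:.0%} chance of a clean run")
```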

    Compounding and Additive Effects

    The extent to which these assumptions are accurate will have a huge impact on AI’s ability to have real impact in the economy. IF jobs can be easily and completely broken down into discrete task lists AND IF consumers of AI work output accept any tradeoffs or shortcomings AND IF problems like hallucinations and interpretability get sufficiently addressed – we get one version of the future. This version of the future results in drop-in remote workers, a country of geniuses in a data center, and the fundamental restructuring of white-collar work.

    If however, it’s hard to fully account for and replace every action completed in a job role, consumers and organizations resist AI integration due to lack of efficacy or completeness, and the challenges of current day AI systems persist, that’s a very different future. This future looks more like a patchwork implementation of agents with varying levels of autonomy that work in more narrowly and well-defined domains. Still economically impactful, but not as capable of swiftly wiping out entire job categories.

    I think the next 12-30 months (from mid-2025 to the end of 2027) will give us a good indication of the true level of job displacement we might expect to see before the end of the decade. As AI model providers turn more of their attention (and compute) to reinforcement learning paradigms, it will become clear whether we have, or can collect, sufficient data to train models on the tasks that make up these white-collar jobs, or whether the models become smart enough to generalize to many domains of computer-focused work without explicitly training on them. AI agents that can use computers like people do – i.e., navigating a browser, opening a webpage, filling out fields – are still in their infancy. Currently, they are not capable in the ways that would be necessary to replace white-collar workers, working too slowly or too unreliably to make for an effective substitute.

    Not Everything, Not Everywhere, Not All At Once

    I expect AI to have a huge impact on the way we work. A recent survey indicated that 42% of workers are using Generative AI tools in a professional capacity, and that share shows no sign of leveling off. I don’t think we will have a drop-in, generally capable remote worker by 2027. But by 2030? It’s more likely by then. And the likelihood of this scenario, and its penetration into the real economy, will increase year after year. The scenario Dario Amodei is worried about, where 30% of jobs are capable of being replaced by AI and 70% aren’t, seems a lot more likely to me than 0% replaceability or 100% replaceability. To his credit, he has made the media rounds himself trying to raise the alarm about these economic possibilities.

    How long does it take to get to 30% of jobs being replaced? Then from 30-31%, and 31-50%? If it’s over the course of decades, I think we’re better equipped to absorb that societally than if it takes three years.

    I don’t want you to brush off the proclamations in that video clip as all hype and bluster. I also don’t want you to leave this post with an overwhelming sense of dread about your job being replaced in the next several years. My hope is that you stay tuned, here and anywhere else you see fit, to understand the rapidly changing AI landscape and seek out nuance in a sea of embellishment, bravado, and naysaying.

  • Last week, Google hosted their annual I/O conference at the Shoreline Amphitheatre in Mountain View, CA. They announced a slew of new ideas and products that range from AI mode for search to a tool that allows users to virtually try on outfits to a prototype homework tutor that sees what a student sees and helps them out. AI was mentioned 92 times during the keynote, which isn’t surprising if you’ve paid attention to events like these over the last several years. What is surprising is that one of these announcements has already broken out of keynote land and into our social media feeds.

    Veo 3

    Google kicked off the conference with a video entirely generated by Veo 3 – their latest video generation model. It’s a short, whimsical vignette of an Old West town populated by a menagerie of animals, complete with squishy gummy bears and convincingly falling confetti.

    There are a few things that really stuck out to me watching this video, things I think set Veo 3 apart from other video generation tools to date.

    • Realistic Physics – the way the animals walk, feathers fly, and objects interact with each other represents some of the best imitation of our physical reality I’ve seen so far. Getting this right is crucial to making a realistic video, as the human eye can easily pick up on inconsistencies with our real-world physical experiences.
    • Fidelity and Realism – the rider’s skin is still a little too perfect, the light on the chocolate bar is too uniform, and the chicken clap action is a little jerky, but these are three nits in a video with thousands of good-enough-to-pass features. More on this in a bit.
    • Sound – this is Veo 3’s real breakthrough. The ability to pass in text as a prompt and generate convincing speech that’s matched to the subject’s lips is something that’s new to the video generation paradigm.

    Blurring the Lines between Real and Generated video

    Access to Veo 3 is available now to anyone willing to part with $249.99 a month for Google’s AI Ultra plan (initially announced with a 50% discount for the first three months). Because this tool immediately got into the hands of creators, thousands of examples have already started to populate the internet.

    An early video that appeared tested a confusing but well-known AI video benchmark – Will Smith eating spaghetti. Here’s a comparison of an AI fresh prince chowing down from 2023, 2024, and 2025, generated by Veo 3.

    It’s easy to see the improvement in these results over two years’ time, evolving from a strange, quite off-putting mimicry of the general concept of eating spaghetti to a convincing video – save for the audible crunchiness of the soft noodles. Tools like Veo 3 are going to make it easier and easier for anyone to create videos that don’t immediately betray themselves as AI-generated. The next example is the one that spurred me to choose this topic for an early installment here at Clearly Intelligent.

    Emotional Support Kangaroo

    I’ve seen hundreds, if not thousands, of AI-generated videos. It’s always been relatively easy to spot imperfections, inconsistencies, and downright impossibilities in these videos that indicate their provenance. I genuinely think the following video is the first that I consumed and scrolled right by, with no idea that it was AI.

    To be fair to myself, the version that I saw had no “AI” indicator or community note like the tweet above. And since the initial appearance, many instances of the video have had disclaimers or community notes attached indicating AI generation. But how many of the millions of people who viewed this video across dozens of platforms saw an AI disclaimer, committed it to memory, and went back to whomever they shared the video with to tell them they didn’t in fact witness an emotional support kangaroo innocently holding his boarding pass while his human argued with the gate agent?

    Critical Consumption

    I tend to think of myself as a relatively savvy consumer of information. That’s why this specific example struck a chord with me. Everything about it was just believable enough – why couldn’t someone have an emotional support kangaroo – that nothing in the video sent a strong enough signal to motivate a more critical viewing. Maybe if it had, I would have noticed that the speech sounded like gibberish, and that the audio didn’t perfectly sync up with the lip movements. But it didn’t, and I went on believing in make-believe for an entire day before seeing the truth come to light. And I was far from the only person fooled.

    Why it matters

    Had I gone on believing that the video was in fact real, my life probably wouldn’t have been that different. Maybe I confidently put down “kangaroo” in a service animal related trivia question one day and lose the round for my team. The specific impact of this AI-generated video is tiny, forgotten in a week by most among the onslaught of new viral moments. What I’m more interested in is the general impact of AI-generated videos that pass for real and that don’t inspire the kind of scrutiny that might cause viewers to question them.

    What happens when the subject of one of these videos isn’t a meek marsupial, but a politician advocating for a policy position they don’t in fact support? Or a violent crime that hasn’t actually happened? I could see a future when an authentic video that’s embarrassing or damaging to a person, cause, or organization is labeled as AI-generated by supporters to obscure the true nature of the video and avoid the fallout from it.

    Our shared view of reality, and the general agreement on basic facts that we once took for granted, have already disintegrated, influenced by social media algorithms and real-life filter bubbles. Video generation tools that create AI-generated videos indistinguishable from real life could enable bad actors to deepen those divisions, cement tribalistic viewpoints, and create controversies from whole cloth.

    Veo 3 is an incredible technical achievement, and as a closed-source tool provided by one of the most influential companies in the world, it comes with a vested interest in creating and maintaining the right guardrails to discourage obviously nefarious use. Additionally, the fact that Google likely trained Veo 3 on an immense corpus of YouTube data and has plentiful compute resources makes it unlikely that an open-source alternative matching Veo 3’s capability appears in the immediate future. But not immediately doesn’t mean never.

    Norms and Incentives

    We’re still in the early days of AI-generated videos, and we haven’t yet collectively developed a set of established norms around them. Different platforms have different rules about labelling content as AI-generated, and different ways of implementing those labels.

    Generally, platforms are incentivized to maximize engagement – so what happens if those AI labels drive engagement down? Do creators stop creating AI-generated videos, or do the platforms relax or change their rules around them? Will users flock to services that clearly delineate the real vs. the artificial? Will a platform come out with a hardline “no AI allowed” stance? How does that get enforced?

    What lies ahead

    If you peruse posts on X/Twitter, you’ll encounter countless declarations that actors will be out of a job soon and that Hollywood as we know it will never be the same because of Veo 3. I don’t think either of those predictions is likely to come true, because they conflate technical capability and visual fidelity with a subjective quality of the finished product – that it’s good.

    A theme you’ll see frequently here at Clearly Intelligent is that I’m hesitant to characterize things as “good” or “bad”. AI is an excellent example of a dual-use technology – one that can be used for both beneficial and sinister purposes. So, I don’t think generative video tools are “good” or “bad” – and I don’t think you should think of them like that either. Instead, evaluate their current capabilities, consume content critically, and be expansive in your consideration about their promise and perils.

  • Welcome to Clearly Intelligent. I’m Mike Cottone, a consultant by day and an artificial intelligence commentator and communicator by night. I’m embarking on this project because I believe that we are in the early days of a period of profound technological change. I have two main goals in mind with this endeavor.

    Goal #1

    Expand understanding of AI technologies and their impact on our world.

    Goal #2

    Steer the world toward a better future.

    Artificial intelligence has the potential to change nearly every aspect of our lives, but the path before us is unpredictable and complex. Join me as we navigate it together.