Late last month, Casey Newton wrote “Why note-taking apps don’t make us smarter” for Platformer. It’s been an open tab in my browser for a bit now, and I wish I had given it my full attention as soon as I opened it. It is excellent and speaks to the challenges that analysts—if not all of us—face when trying to discover interesting and potentially important insights and information on the Internet.
From Research to Thinking to Insight
Newton talks about his experiences as a journalist, which were very familiar to me as a recovering analyst. I strongly encourage you to read the piece for yourself, but here are a few of Newton’s thoughts and observations that resonated with me:
“I have so much information at hand that I feel paralyzed.”
“…tags are more about search. Bidirectional links, which some apps show you on pages that include snippets of all the other notes that contain the same link, are more about browsing and rediscovery.” [I tend to use tags for rediscovery but their utility as a rediscovery mechanism does not scale: “rediscovering” 200 articles that I applied the same tags to does not necessarily help me.]
“But the original promise of [a note-taking app] — that it would improve my thinking by helping me to build a knowledge base and discover new ideas — fizzled completely.” [I felt the same way about mind mapping software as well.]
“One interpretation of these events is that the software failed: that journaling and souped-up links simply don’t have the power some of us once hoped they did. Another view, though, is that they are up against a much stronger foe — the infinite daily distractions of the internet.”
“It’s here that AI should be able to help. Within some reasonable period of time, I expect that I will be able to talk to my Notion database as if it’s ChatGPT.”
“Today’s chatbots can’t do any of this to a reporter’s standard. The training data often stops in 2021, for one thing. The bots continue to make stuff up, and struggle to cite their sources.”
“…it is probably a mistake, in the end, to ask software to improve our thinking…. The reason, sadly, is that thinking takes place in your brain.”
“…thinking is an active pursuit — one that often happens when you are spending long stretches of time staring into space, then writing a bit, and then staring into space a bit more. It’s here that the connections are made and the insights are formed.”
If you’ve found yourself nodding along to these quotes, do yourself a favor and read Newton’s entire piece. It is worth the time.
The Onion of Our Interests
In his piece, Newton focused on his work, but I think the same is true in our personal lives. AI as an alternative approach to discovery is focused on big data rather than big ideas.
As a professional analyst, I am responsible for triaging and making sense of large volumes of diverse information related to a handful of topics. As a person, my interests are far more diverse, transitory, and of differing depths. For example:
If I want to know what’s happening on a day-to-day basis, I turn to The New York Times, The Washington Post, The Oregonian, Oregon Public Broadcasting, and Willamette Week.
During the week, time permitting, I turn to more thoughtful takes on current events and contemporary trends from The Atlantic, The Economist, etc.
Also on a weekly basis, I will turn to perspective-broadening sources like Kottke and MetaFilter as well as Noema Magazine, Aeon, Psyche, and Nautilus. Why? My sense of “interesting” is dynamic. Why the information is interesting, or how long I might find it interesting, varies greatly.
The one thing missing from this list? Text messages from friends forwarding me articles that they think I might find interesting based on their understanding of my or our shared interests.
Time and bandwidth permitting, I might sneak in a podcast or two as well.
All in all, it is far too much information to triage, let alone consume and try to make sense of. I feel like Newton, except that rather than being paralyzed by the massive amounts of information that are part and parcel of modern living, I am drowning in it because I have to do my own triage. The information paradox—that we’re drowning in information and starving for knowledge—is every bit as real today as it was when the IEEE discussed it in 2017.

FOMO—the fear of missing out—is very real to experts and passionate enthusiasts alike because they might miss the one article that contains the one thought that helps them better understand or think differently about their professional and personal interests.
In this, discovery is often difficult and time-consuming because it is, all too frequently, an individual effort. There are maybe 2-3 issues that I am passionate about for a sustained period of time; there are maybe 6-12 issues in which I have something more than a passing interest, but they represent curiosity more than passion; and lastly, there are interesting distractions that are best described as serendipitous discoveries. In some cases, I curated sources to enable this (e.g., pre-Muskian Twitter); in other cases, the platform (e.g., YouTube before its algorithm was tuned to maximize user engagement) and its users (e.g., MetaFilter and Reddit) supported me.
The thing is, algorithms fail to see our interests as existing along a spectrum, and they fail to see that the information related to our interests can be interesting (and potentially important) or important (and potentially interesting). There are as many knowledge graphs as there are people and yet, as Newton pointed out, we have yet to find a way beyond search (which is really a wildly inefficient mechanism for discovery) to help us elegantly navigate volumes of information that scale well beyond human capacity.
The Dynamic, Multidimensional Graph of Our Thinking
Our thinking is not an intelligence sitting on top of a large language model. We do not construct our thoughts and insights one word at a time based on every word that we’ve ever read; rather, we’ve chunked interesting and important pieces of information in ways that we can access using our long-term working memory (ref., Scientific American’s “The Expert Mind”). Taken together, these chunks form one or more graphs, with one small but very important detail: our knowledge graphs contain tacit knowledge, knowledge for which there is no digital artifact.
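To make that detail concrete, here is a minimal sketch (in Python, with hypothetical names of my own; nothing below comes from Newton’s piece) of a personal knowledge graph in which some chunks are tacit, i.e., have no digital artifact behind them:

```python
# Minimal, hypothetical sketch of a personal knowledge graph in which some
# nodes ("chunks") are tacit: they have no digital artifact behind them.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    label: str
    artifact_url: str | None = None          # None marks tacit knowledge
    links: set[str] = field(default_factory=set)

    @property
    def is_tacit(self) -> bool:
        return self.artifact_url is None

graph: dict[str, Chunk] = {}

def add_chunk(label: str, artifact_url: str | None = None) -> None:
    graph[label] = Chunk(label, artifact_url)

def connect(a: str, b: str) -> None:
    graph[a].links.add(b)
    graph[b].links.add(a)

# An artifact-backed chunk and a tacit one, linked together.
add_chunk("information paradox", "https://example.com/ieee-2017")
add_chunk("how my old team actually triaged reporting")   # tacit: no artifact
connect("information paradox", "how my old team actually triaged reporting")

# Any tool that only indexes artifacts misses the tacit half of the graph.
print([c.label for c in graph.values() if c.is_tacit])
```

The point of the sketch is simply that a tool indexing only what has been saved, tagged, or linked will never see the tacit nodes that hold much of the graph together.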

As a tool, LLM-driven AIs might be comparable to a Wikipedia page: they provide us with useful overviews that might answer a passing question or serve as the point of departure for deeper inquiry.
The problem with both LLM-based AIs and Wikipedia is that, once we use them as points of departure for deeper inquiry, the second step leads us into the broken world of search, alone and with tools that have been stagnant for years.
If we are trying to create deep learning algorithms that will supplant search on the way to developing (or at least mimicking) artificial intelligences, then we need to start with the person asking the question, their objectives, their knowledge base (to include its evolution and the logical fallacies and biases at play in it), and where and how it connects with their professional and social graphs. They are not looking for a standalone answer, but for a piece of information that contributes to the graph of their knowledge, their understandings of the worlds around them, and their philosophies.
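As a thought experiment only (the names and fields below are mine, not any real system’s API), here is a sketch of what the input to such a successor might look like if it started from the person rather than from a bag of keywords:

```python
# Hypothetical sketch: what a search successor would receive if it started
# from the person rather than from a query string alone.
from dataclasses import dataclass, field

@dataclass
class Inquiry:
    question: str
    objective: str                                          # why the person is asking
    knowledge_base: set[str] = field(default_factory=set)   # chunks the person already holds
    known_biases: set[str] = field(default_factory=set)     # fallacies/biases at play in that base
    social_graph: set[str] = field(default_factory=set)     # people who share the interest

def worth_surfacing(inquiry: Inquiry, candidate_chunks: set[str]) -> set[str]:
    """Return only what would extend the person's existing graph, not restate it."""
    return candidate_chunks - inquiry.knowledge_base
```

However such a system is built, the design choice this sketch tries to capture is that the answer is judged by what it adds to the asker’s graph, not by relevance to the query string alone.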
To date, we have failed or are failing to build this type of replacement for search. LLM-driven AIs that are being designed to serve as artificial general intelligences are accruing technical debt based on bad assumptions. In this, the art of the (technologically) possible is far less interesting than the reality of what we, as thinking people, desire and need to thrive in the face of overwhelming volumes of information. Search’s successor needs to be qualitatively and demonstrably superior to the tools we use today to cultivate and connect our personal knowledge graphs.