How to Get What You Want Out of the Internet
None of the top 20 queries on Google's ranked list of popular searches runs longer than two words. You have to go pretty deep before you find the most popular question. Do you want to know what it is? It's okay, I won't make you go look it up. The most popular thing people ask Google is NOT "how does gravity work?" or "why do I feel this way?" The number one question (and this comes straight from the search data) is: what day is it?
This means that "the present" is the most sought-after truth the internet can provide (though being present often requires being off the internet…), but what if your present calendar date shows a big Physics exam tomorrow and you need to actually learn something? Or maybe today calls for a job search because the bills aren't going to pay themselves. What questions should you ask then?
Most people type what they want to know and something (hopefully relevant) shows up. That works fine for movie times and pizza places, but it breaks down when you're not entirely sure how to describe what you want, and we have all been there. This page is about asking the right questions, which is to say it is about accurately expressing your desires. What a weighty way to think of a question mark, eh?
Step 1: Define the question.
Before you type anything, ask yourself: what is actually happening here? What is this curious thing, and what about it makes me want to know more? For example, say you notice that people seem to believe things they've heard before more readily than things they haven't, even when the familiar thing is completely wrong. Formulate your question before firing away. "Do people believe stuff if you say it enough?" Ah, that's the "illusory truth effect," apparently. Scientists gave the phenomenon an actual name years ago. Now you have a second question to pursue, because you know what the phenomenon is called. Searching that name alongside "academic publication" turns up more prominent (and citable) academic work on the subject.
Some questions contain multiple parts, even ones you think are simple. Most real phenomena involve two or more mechanisms working together, not just one. Instead of searching "Why do first impressions matter so much?" and getting sucked into reddit threads, search "how do first impressions affect the brain?" and get answers worth handing in to a professor. It turns out the answer has two parts: the brain forms snap judgments (called thin-slicing), and early information sticks better than later information (the primacy effect). Search either term individually to extend the rabbit hole, if you wish.
The rule: an accurate description produces accurate results.
Step 2: Decide where the answer is allowed to live.
Before searching, decide: what kind of source would actually know this? Medical questions: nih.gov, pubmed.ncbi.nlm.nih.gov. Psychology research: apa.org, pmc.ncbi.nlm.nih.gov. Legal questions: official .gov sites. Historical facts: university archives and library databases. Add the site to your search before you run it. Type: illusory truth effect site:nih.gov. Now you're not searching the whole internet — you're searching the part where actual researchers publish. You've pre-filtered for quality before a single result appears. It's like the difference between asking the whole cafeteria "does anyone understand the math homework?" versus walking directly to the kid who always gets a hundred. Same question. Completely different quality of answer.
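The pre-filtering step above can be sketched as a tiny query builder. Only the `site:` operator is real search syntax; the helper name and the site list here are illustrative, not part of any actual API.

```python
def site_restricted_queries(topic: str, sites: list[str]) -> list[str]:
    """Pair a topic with the site: operator, one query per trusted domain."""
    # site: restricts results to a single domain, so each query
    # searches only the part of the internet where experts publish.
    return [f"{topic} site:{site}" for site in sites]


# Example: the medical/psychology sources suggested above
queries = site_restricted_queries(
    "illusory truth effect",
    ["nih.gov", "apa.org", "pmc.ncbi.nlm.nih.gov"],
)
for q in queries:
    print(q)  # e.g. "illusory truth effect site:nih.gov"
```

Run each of those queries separately rather than cramming every domain into one search; a result from any of them is already pre-filtered for quality.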
Step 3: Read the source, not the summary.
When you find a real paper or study, click through to it. Don't settle for an article that says "according to researchers…" or (worse) just skim the AI summary. Find the actual researchers and read what they said. Both popular publications and AI summaries compress, simplify, and sometimes invent. It's like a game of telephone: Sara said "blue water," but by the time it reaches you the phrase has become "new feature" or, to quote The Simpsons, "Purple Monkey Dishwasher" (oh Ralphie). The original source is the place to go.
Note: Zero results means room for innovation
A good search can fail in a useful way. If you search "Do studies show video games improve memory", find nothing, and walk away disappointed, you've missed the lesson: finding nothing that supports a specific idea is itself meaningful. Because the question was specific enough, zero results is real information. It means there is room for innovation, if nothing else, and it can point you toward your next research project.
The short version
Bad searching: Type the question → read the first result → believe it.
Good searching: Name the mechanism precisely → restrict the search to credible sources → run the search → read the original source. If zero results come back, you have found a question your own research might answer. Go innovate!
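The good-searching loop can be sketched in code. Everything here is hypothetical scaffolding: `run_search` stands in for whatever search tool you actually use, and the toy index and URLs are placeholders, not real results.

```python
def good_search(mechanism, trusted_sites, run_search):
    """Restrict a precisely named mechanism to trusted sources, in order."""
    for site in trusted_sites:
        results = run_search(f"{mechanism} site:{site}")
        if results:
            # Read the source itself, not a summary of it.
            return results[0]
    # Zero results is information: the question may be yours to answer.
    return None


# Toy stand-in for a real search engine (hypothetical data)
fake_index = {
    "illusory truth effect site:nih.gov": ["example-study-url"],
}
hit = good_search(
    "illusory truth effect",
    ["nih.gov", "apa.org"],
    lambda q: fake_index.get(q, []),
)
print(hit)  # prints "example-study-url"
```

Note that the loop checks sources in the order you rank them, and a `None` return is not a failure state; it is the "room for innovation" signal from the note above.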
Source List: On Asking Good Questions
Plato — The Apology (~399 BCE)
Socrates' trial testimony. The original source on disciplined questioning — knowing what you don't know as the beginning of knowledge, not the end of it.
Project Gutenberg: gutenberg.org/files/1656/1656-h/1656-h.htm
Internet Classics Archive: classics.mit.edu/Plato/apology.html
Plato — Meno (~385 BCE)
The dialogue that introduces the Socratic Method as a teaching tool — guiding a student to knowledge through questions rather than telling them the answer.
Project Gutenberg: gutenberg.org/files/1643/1643-h/1643-h.htm
Internet Classics Archive: classics.mit.edu/Plato/meno.html
Francis Bacon — Novum Organum (1620)
The founding document of the scientific method. Bacon's four "Idols" are a taxonomy of bad questioning — the ways our minds lie to us before we even open a browser.
Project Gutenberg: gutenberg.org/ebooks/45988
Constitution.org: constitution.org/bacon/nov_org.htm
Richard Feynman — "Cargo Cult Science," Caltech Commencement Address (1974)
Feynman on the difference between the form of scientific inquiry and the actual substance of it. The hardest part of asking a good question is checking whether you're actually asking it.
Caltech Library (canonical): calteches.library.caltech.edu/51/2/CargoCult.htm
Friestad, M. & Wright, P. — "The Persuasion Knowledge Model" (1994)
Journal of Consumer Research, Vol. 21, No. 1, pp. 1–31. When people sense persuasion attempts, they discount the source automatically. Applies directly to why search results have a trust problem baked in.
DOI (permanent): doi.org/10.1086/209380
McDaniel, M.A. et al. — "Learning Introductory Biology" (2021)
CBE—Life Sciences Education. Students who learn abstract principles outperform students who learn only from examples when both hit a new problem. Steps 1–2 above are about abstraction. This is the research on why that works.
Full text (free): lifescied.org/doi/10.1187/cbe.21-12-0335
Price, A.M. et al. — "Expert Problem-Solving Process in Science and Engineering" (2021)
CBE—Life Sciences Education, 20(3), ar43. Expert problem solvers build a predictive framework — identifying the key features of a problem and how they relate — before they ever start searching for a solution. The good search isn't a query. It's a framework.
PMC (free): pmc.ncbi.nlm.nih.gov/articles/PMC8715817/
CBE direct: lifescied.org/doi/abs/10.1187/cbe.20-12-0276