The paradigm of knowledge keeps changing. In the beginning, it was the era of know-how. An individual’s experience and mastery were the greatest assets, passed down only inside someone’s head or within a small part of an organization. The central question in that era was naturally “How do we implement this?”, and anyone who could answer that “how” well was considered a valuable talent and central to the work.
Then came the eras of know-what and know-where. As information became easier to find and more high-quality content was created, developers could get a lot of work done simply by searching Google, Stack Overflow, and other sources. Even so, know-how remained at the core: the information out there was of high enough quality that finding it generally meant it was reliable.
But that period is now giving way to the era of “Know-Right.” (I’m not sure whether this term actually exists or is correct; I’m using it for my own thinking…) Information can be composed easily (I think “composed” is more accurate than “searched”), but now we have to verify how reliable that information is. (I’m writing this with AI assistance, so you should verify this text’s reliability too.)

Here is a simple summary in a table:

| Category | Know-how | Know-what | Know-where | Know-Right |
|---|---|---|---|---|
| Focus Era | 1990s ~ mid-2000s | late 2000s ~ mid-2010s | late 2010s ~ early 2020s | mid-2020s ~ (AI era) |
| Core Question | How do I implement this? | What should I use? | Where is the answer? | Is this answer really correct? |
| Location of Knowledge | Personal experience, internal docs | Official docs, patterns, best practices | Search engines, GitHub, Q&A | Context, constraints, system understanding |
| Developer Strength | Skilled hands-on ability | Correct selection ability | Fast searching ability | Judgment, validation, reasoning ability |
| Learning Method | Repetition, trial-and-error | Case study, comparative learning | Optimized search, reference tracing | Deep understanding + cross-validation |
| Cause of Failure | Lack of experience | Wrong choices | Copy-paste lacking context | Accepting without verification |
| Role of AI | Almost none | Reference tool | Powerful searcher | Both a subject of judgment and a tool |

So how does this change the way we think? As an example, consider a simple Redis failure scenario (a short code sketch follows the table):

| Era | Developer Reaction |
|---|---|
| Know-how | “We tuned it this way before and fixed it.” |
| Know-what | “This issue is caused by KEYS; use SCAN instead.” |
| Know-where | “There’s a solution in the official docs and on GitHub.” |
| Know-Right | “For this workload, SCAN is also risky, and the real problem is the data model.” |
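
To make the contrast concrete, here is a minimal Python sketch using the redis-py client. The `session:*` key pattern and the `sessions:index` set are made-up names for illustration only; the point is the shift from “use SCAN instead of KEYS” to “why are we scanning at all?”

```python
# A minimal sketch with redis-py. The key names are hypothetical.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Know-what: KEYS blocks Redis's single thread while it walks every key.
# stale = r.keys("session:*")   # O(N) over the whole keyspace; can stall a busy instance

# Know-where: the commonly found answer is SCAN, which iterates in small batches.
stale = list(r.scan_iter(match="session:*", count=1000))

# Know-right: on a large, hot keyspace even SCAN repeats a full keyspace walk on
# every cleanup run. A data-model change, such as tracking session keys in a set
# (or simply relying on TTLs), can remove the need to scan at all.
# r.sadd("sessions:index", session_key)   # recorded when the session is created
# stale = r.smembers("sessions:index")
```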

When ChatGPT first came out, things were not this extreme, but the pace of change keeps accelerating and the importance of verification keeps growing. So the question arises: how should people prepare for this? Does it necessarily mean that humans must verify everything?
Some people say that, just as we don’t verify every line of machine code the compiler generates and managers don’t review every result a team member produces, we should accept a certain level of mistakes. That perspective isn’t wrong. Still, a single mistake can cause a huge problem, and while humans can often trace their own mistakes, the mistakes an AI makes may be opaque to us. (There is no definitive answer here; in a few years this discussion itself might be meaningless.)
One technique for improving LLM reliability is test-time scaling: spending extra compute at inference, for example by generating multiple candidate answers and having a separate (often smaller) verifier model score them, then discarding anything below a confidence threshold and keeping the best of the rest. In effect, one model checks another’s output, which underlines the broader point: evaluating the trustworthiness of results is becoming the crucial skill for anyone who consumes this information.
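
As a rough illustration (not any specific product’s API), the pattern resembles verifier-filtered best-of-N sampling: generate several candidates, score each with a separate model, drop everything below a threshold, and return the best of what remains. In the sketch below, `generate_candidate` and `verifier_score` are placeholder stubs standing in for a large generator model and a smaller verifier model.

```python
# Sketch of verifier-filtered best-of-N sampling, one simple form of test-time scaling.
# generate_candidate() and verifier_score() are placeholders; a real system would call
# a large generator model and a smaller verifier/reward model here.
import random
from typing import Optional


def generate_candidate(prompt: str) -> str:
    # Placeholder: sample one answer from the large model.
    return f"candidate answer #{random.randint(1, 1000)} for: {prompt}"


def verifier_score(prompt: str, answer: str) -> float:
    # Placeholder: a verifier would return a confidence score in [0, 1].
    return random.random()


def answer_with_verification(prompt: str, n: int = 8, threshold: float = 0.6) -> Optional[str]:
    candidates = [generate_candidate(prompt) for _ in range(n)]
    scored = [(verifier_score(prompt, c), c) for c in candidates]
    accepted = [(s, c) for s, c in scored if s >= threshold]
    if not accepted:
        return None  # nothing passed verification; escalate to a human instead
    return max(accepted)[1]  # highest-scoring surviving candidate


if __name__ == "__main__":
    print(answer_with_verification("Why did our Redis instance stall?"))
```

The code itself is trivial; what matters is the posture it encodes: answers that fail verification are treated as unusable and handed back to a human, which is the Know-Right stance applied inside the system itself.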
So my question becomes: How should “people” be educated in this era? …I’ll leave that discussion for another time (maybe — I’m still thinking about it).