This is a widespread problem in Silicon Valley, whose beginning is typically marked by the 1956 arrival of Nobel Prize winner William Shockley. Shockley was a brilliant semiconductor physicist, but quite deficient in other areas. His inability to work well with others caused key employees ("The Traitorous Eight") to leave and create their own semiconductor company, which quickly eclipsed Shockley's. He also embraced eugenics. As Silicon Valley grew and grew, it seemed to attract and nurture people with the attitude of "I'm brilliant in one area. Therefore I am (can be) brilliant in everything." When I was organizing panels/seminars in the 1990s and early 2000s, I would warn the speaker: "Don't be surprised if half the people in the audience think they know more about the topic than you, and be prepared for 2 or 3 of them to be right. The difficulty can be determining which is which."
For me, the surprising part of related research was the extent to which people who had expertise/competencies underestimated the amount of time and effort it took to achieve those skills. Author Malcolm Gladwell popularized the claim that it takes roughly 10,000 hours of practice to achieve expertise in any field (equivalent to 5 years of 40-hour work weeks). Although it is highly dubious that this number is the same across fields and individuals, it is useful as a reminder of the magnitude of the practice and learning required. I would speculate that such underestimates are a significant factor in people mis-estimating their progress toward competency in new domains.
Another part of that research found that awareness of how little one knows is gained relatively early in the learning/training process. One could speculate that this realization is important to learning more. However, those results may be obsolete, or it may be that the news and the Internet merely make it seem that there are so many people who persist in the belief that they are experts, regardless of how often the contrary is demonstrated to them.
Thinking about the variants of this problem involves a closer consideration of what constitutes "expertise", "intelligence" ... However, trying to talk about the components of intelligence is fraught with problems. I learned this in a college course on Cognitive Psychology: it covered multiple research papers, each dividing "intelligence" into seven categories, but the papers' categories were very different from each other. The lesson? Seven is the appropriate number of categories to fit into a research paper (in addition to being a magic number). The four terms in the title are not such categories, but simply a way to facilitate this particular discussion.
What constitutes "intelligence" has changed over the years. Not that long ago, "intelligence" was commonly used to describe the knowledge that one possessed. "Clever" was used to describe the ability to use information, both knowledge one already possessed and information specific to a given situation (perceptions, inquiry, ...). "Quick-witted" aligned with raw mental "horsepower", although the ability to separate that from other aspects of intelligence came only recently, from MRIs and similar scanning technologies.
Intellectual "skills" involve a class of manipulations of information and may be relevant to multiple subject areas. Although a reasonable body of knowledge is necessary for "wisdom" in a particular subject area, it is not sufficient. Wisdom is the ability to use knowledge effectively, including recognizing what is and isn't important in a given situation, and where the opportunities and traps lie.
The contention over the XQ Super School Project application provides several examples.(foot#1) Although the submission did not create a legal commitment on the part of the School District, this is the sort of maneuver that is often intended to create a decisive psychological commitment.(foot#2) This tactic is so well known and widely used that it has names. The generic term is "planting the flag". In Public Works construction, it is known as "driving a stake in the ground".(foot#3) If the proposal group lacked the wisdom to know of this tactic and the likely reaction, do they have the wisdom to create the proposal? If they did know and made the submission anyway, shouldn't this also disqualify them?
The discussion following the revelation of the XQ submission revealed another common fallacy: that certain skills obviate the need for knowledge and wisdom. The classic example of this is from the corporate world, where you get some managers who think that management skills are all they need, and that they don't need to understand what they are supposed to be managing (technology, markets, ...). With many of the School District committees, you get people who say that the analytic skills they learned in the technology or financial industry make them qualified to oversee, if not create, educational policy. It is appalling that they didn't learn one of the basic admonitions for analysts: "Essentially, all models are wrong, but some are useful."(foot#4) That is, in order to reduce the complexity of reality to a level that can be dealt with, you create models that ignore certain aspects--get them "wrong"--because you believe them to be insignificant or irrelevant for that particular situation. Einstein's razor: "Make things as simple as possible, but not simpler". Knowledge of a domain (if not wisdom) is required to know which simplifications can be made and which must be avoided.
In doing analysis, it is important not just to make effective use of what you know, but to be aware that there are things you don't know. There is a 4-element matrix: the known-known, the known-unknown, the unknown-unknown, and the unknown-known. The first two we all deal with all the time. The "unknown-unknowns" are things we aren't aware that we don't know. The "unknown-knowns" are things that we know but don't remember at the time of the analysis.(foot#5) The analyst lacking subject matter expertise is at the mercy of those who have it, because it shifts the boundaries of the unknowns: known-unknowns can become unknown-unknowns and known-knowns can become unknown-knowns. A good analyst has learned how to cope with the known-unknowns, but unknown-unknowns can be accommodated only by providing a cushion to handle them as they arise. When the subject matter experts fail to provide relevant information to the analyst, those omissions can distort or invalidate the results.
This is a problem not just of resident-volunteers, but also of consultants. For example, consultants hired by City Hall recommended the location for a new fire station without taking into account the street grid, congestion, and whether land would be available at that location. The consultants' recommendation was so flagrantly ridiculous that it was immediately killed.
Another variant of this is common in startups: Technologists think that their raw mental horsepower will allow them to quickly acquire expertise in turning a technology into a product, marketing it, scaling up a company,... The widely cited example where the founders had the wisdom not to fall into this trap is Google (hiring Eric Schmidt). That was in 2001--you can draw your own inferences as to why this example is still in use.
During the DotCom boom, one common diagnosis of the failure of many startups was "lack of adult supervision". The high-tech industry has long had a problem where it values basic knowledge, enthusiasm and willingness to work long hours over wisdom.(foot#6) This has long been an unresolved problem in science and technology: many of the discussions refer to "two-track promotion/career paths" (management and scientific/technical). To get an earful, ask an older engineer or software developer (age 30 may suffice, but 35 has more perspective). Or ask about the balance at various companies between treating technical talent as an investment and as a consumable (like a printer cartridge).
The decline of people being willing to think deeply has been much noted, and may even be true. Part of the diagnosis is that electronic lookup of information promotes two bad habits: (1) stopping your search once you find a short answer, and (2) unfocused follow-up that takes you further and further afield (breadth, not depth). Another diagnosis is that the structure of conversations on social media and on mobile devices (smart phones, tablets) discourages deep thinking.(foot#7) Locally, an additional factor is the pursuit of "disruptive change". Many misunderstand this as new approaches obsoleting existing ones, along with their knowledge and skills. In reality, disruptive change comes from recognizing that advances in one aspect of the solutions to a problem have outpaced others, thereby shifting the tradeoffs and dramatically changing which solutions are best. Notice that it still requires knowledge, skills and wisdom related to the many aspects. Prime examples of this are seen in so-called disruptive technologies where the innovation is in providing ways to subvert existing laws and/or inhibit enforcement (although I personally don't regard these as deserving the label "disruptive technologies", many do).
In local civic problems, shallow analysis--ideology, generic rules, anecdotal evidence ...--too often prevails, and attempts to do more thorough analyses are rejected (for example, as being obstructionist). There seems to be a decreasing appreciation of "The devil is in the details" and "The road to hell is paved with good intentions" (unintended consequences). One of the basic conflicts one sees in Palo Alto is between the old-style engineering ethos and that of the "promoter" (marketeer, evangelist, shill, ...). The spectacle of people who proclaim the importance of information and knowledge as a concept while being so disrespectful of it in practice is simultaneously amusing and sad.
Part of the reason that electronic search encourages shallower analysis (mentioned above) is that knowledge is increasingly fragmented, in part because electronic search encourages small, tightly focused chunks. But this pattern spreads beyond knowledge bases, such as the Web, that are designed to be accessed primarily through search. For example, there are a number of blog pieces that I could have usefully written, but I couldn't get the people who were the experts in various aspects of the issue to confirm that what I had drafted on that aspect adequately represented the situation and then agree to help respond to follow-up questions that might arise. Consequently, those people published their information in small chunks spread over time in different places (Palo Alto Online, emails to City Council, video recordings of City Hall meetings, ...). And much of that is unlikely to achieve a search engine ranking high enough to be seen. And some of it never gets adequately written up: the person says that when someone needs to know, they should just contact him. (Arrrgh!)
I was reminded of this when two neighbors asked for help with electronic devices. Both complained about the lack of any documentation (including online), so you can infer their age. Documentation is not just an expectation of the older generations, but can impact product quality and usability. For example, one place I worked required that a draft Users Guide be produced as part of the design process for information systems. One benefit was that it established a consistent terminology for the user interface. Another was that it identified areas of excessive complexity: if you had problems describing a feature, the user was likely to have problems understanding how to use it, or would use it incorrectly. Yet another was that it revealed omissions and incompatibilities in the design. We software developers hated this because we were impatient to get to coding. At other places I worked, documentation was done at the end, with the inevitable dilemma of how to deal with the problems it revealed.(foot#8) The neat solution was to not have any non-trivial documentation: the application was supposed to do whatever it did, the user was responsible for discovering its capabilities, and you could change it without meaningful notice. What surprises me is the number of entrepreneurs developing apps who don't understand that, for larger enterprises, the lack of real documentation is both a black mark and a red flag against their product.
- Black mark: Although the end-user may not use the documentation, the people who develop the training materials and staff the internal help desks do, and the absence of documentation substantially raises the total costs of the product.
- Red flag: The absence of documentation indicates deficiencies in the design process that could be potential time-bombs (incorrect results, security vulnerabilities, ...).
Knowledge is not just a heap of individual facts, but the structuring of those facts and the credibility of the assemblage. If you have to be highly skeptical of all the components, the expense of the fact-checking limits either the depth and breadth of that assemblage, or its credibility. This has been a growing concern for years, but most people became desensitized (like the metaphorical frog in slowly warming water). However, the current Presidential primary campaign has highlighted this problem, so there is increasing discussion about it online, including a suggestion that undoing the harm done by social media to public discourse would be a worthy cause for the Zuckerberg-Chan charity. However, there are few acknowledgments of the irony of using social media as the venue for discussing how social media is damaging discussion.
1. Palo Alto Weekly/Online news and opinion of 2015-12-04:
Undisclosed new-school proposal sparks dissent: Role of superintendent, enrollment subcommittee criticized, defended by Elena Kadvany
Editorial: It's the secrecy, stupid: School superintendent works behind board's back to advance new 'innovative' high school
Guest Opinion: Hitting 're-set' with school enrollment committee by Todd Collins
Guest Opinion: Task force needed to analyze school ideas by Sheena Chin, Natasha Kachenko, Joe Lee, Diane Reklis and Mark Romer
2. Some of the ploys used to prevent changes to such a proposal are to claim
- that changing it at that stage would destroy credibility with the funder, or
- that there isn't enough time to do it properly, or
- that redoing the portion done secretly would take effort away from refining the proposal and thereby make it weaker and less likely to be funded, or
- that it would be a slap in the face of well-intentioned volunteers.
3. "Driving a stake": The standard practice for getting public approval for public works projects is to knowingly present low estimates of the cost, waiting until after construction has begun to raise the estimates, typically in a series of increments (sound familiar?). But by that time, there has been enough psychological, financial and political commitment to the project that it is extremely hard to cancel. The phrase refers to marking where to dig.
4. Quote by George E. P. Box, a statistician working in the areas of Bayesian inference, experimental design and quality control. From his book Empirical Model-Building and Response Surfaces.
5. "If only HP knew what HP knows, we would be three times more productive", Lew Platt, then-CEO of HP.
In a 2002 briefing as Secretary of Defense, Donald Rumsfeld popularized this terminology, but missed the unknown-knowns component.
6. Tracy Kidder's 1981 book "The Soul of A New Machine", which won a Pulitzer Prize, described the crash effort to develop a new computer (an early mini-computer). It ended with the engineers being sidelined--tasked to do maintenance and updates on that model--while new engineers were brought in to develop subsequent models.
I don't recommend this book--I read it when it came out and although it was interesting, it wasn't good enough to recommend then (the author seemed to remain too distant from his subjects to have critical insights--he was an observer/chronicler who achieved only limited understanding). However, the title can provide a useful hook for finding other discussions of the trend.
7. I have encountered lots of mentions of this argument, but none good enough to link to. Articles by Sherry Turkle promoting her current book allude to this, and some of the online comments take it a little further. Note: I do not recommend the Turkle book (my Amazon review).
8. In "ancient" times, one of the ways of acknowledging a bug in a program that we couldn't fix before the deadline was to declare "It's not a bug, it's a feature!" By the end of the DotCom boom (circa 2001), many newly minted software developers didn't have even a mild sense of embarrassment about this.
An abbreviated index by topic and chronologically is available.
The Guidelines for comments on this blog are different from those on Town Square Forums. I am attempting to foster more civility and substantive comments by deleting violations of the guidelines.
I am particularly strict about misrepresenting what others have said (me or other commenters). If I judge your comment as likely to provoke a response of "That is not what was said", don't be surprised to have it deleted. My primary goal is to avoid unnecessary and undesirable back-and-forth, but such misrepresentations also indicate that the author is unwilling/unable to participate in a meaningful, respectful conversation on the topic.
If you behave like a Troll, don't waste your time protesting when you get treated like one.