Ryan and I were chatting about Alex Wissner-Gross's new formula for intelligence and discussed a potentially overlooked part of the equation unique to humans. Simply put, we must fulfill underlying needs before we can optimize future freedom of action, meaning a given individual may have a different starting point with a series of unique challenges. There is an incredible range of starting points an individual may occupy before reaching her full potential.
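For reference, the equation in question (from Wissner-Gross's work with Cameron Freer on "causal entropic forces") models intelligence as a force that pushes a system toward futures with the most options open; the notation below is my paraphrase of that work, not an authoritative statement of it:

```latex
% Causal entropic force (Wissner-Gross & Freer):
% intelligence acts to maximize future freedom of action.
F = T \, \nabla S_c(X, \tau)
```

Roughly, $F$ is the "intelligent" force, $T$ is a temperature-like constant setting its strength, and $S_c(X, \tau)$ is the entropy of the future paths accessible from the present state $X$ over a time horizon $\tau$. The point of this post is that, for a human, that set of accessible futures can be far more constrained than it is for the idealized agents in the theory.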
It takes a very different type of intelligence to optimize opportunities over the long term than over the short term. If we limit the opportunities we have today for the sake of long-term potential, is that an "intelligent" decision? Consider the following two scenarios.
One: Intelligence is often used to optimize monetary resources, which can then be used to purchase opportunities. Much of the investing we do is based on this concept - but in a temperamental manner. People buy when it's expensive & sell when it's cheap. To go against the norm and make a big bet on something while everyone else is running away is labeled "stupid" (at the time). If it does pay out, there's a foot-in-mouth moment and the contrarians are revered as genius investors. I'm often skeptical of these scenarios because it only takes one outlier case (or "luck") to catapult someone (no matter their intelligence) into infinite future opportunities.
Two: In the opposite scenario, "street smarts" is a specific form of intelligence related to instinct and thinking on your feet, often more essential for increasing near-term opportunities. We also associate this intelligence with individuals in lower socioeconomic environments. For Alex's theory to work, it must assume the near-term realities many of these individuals face are nonexistent - that we live in an industrialized society and have raised the standard of living to the point where the lower tiers of Maslow's hierarchy are a given. As such, the intelligence used to optimize these lower tiers isn't seen as valuable and is absent from Alex's theory.
Consider an individual, Annie, who grew up in a low-income household, school district, and culture. A robot can play a game where everyone is guided by the same rules and tools, whereas Annie faces restrictions unlike those of other individuals her age. The robot Alex cites in his work is able to systematically optimize future freedom of action because it functions at the highest level of Maslow's hierarchy. Humans, on the other hand, do not live within these exact conditions. As such, there are more immediate needs humans must satisfy before reaching their intellectual potential.
A recent Brookings report made waves trying to understand why high-achieving low-income students don't apply to top colleges when these institutions often end up costing less for low-income students. Alex's theory, adapted to Annie, assumes she isn't intelligent enough to leverage her academic success for the greatest future opportunities. Yet it's less that Harvard isn't valued as a great way to increase future opportunities; going to Harvard just isn't in Annie's realm of possibility. Today Annie needs to think about how she's getting home from school, finding something to eat, and fulfilling her social and belonging needs. Annie must balance her unique set of short- vs. long-term constraints and opportunities.
Fortunately, the U.S. has established a somewhat consistent public education system for all students. By attending school, Annie receives food as well as an education. The public education system is a way for us to equalize the starting point for individuals. More and more, we are trying to provide the lower tiers of Maslow's hierarchy in the hope that everyone can focus on maximizing their future freedom of action.
In a similar way, Annie must go against her environment's norm - everyone around her is synchronized in their decisions about how to optimize future opportunities. Like the contrarian investor, she needs to act outside the norm of her direct surroundings and make a potentially controversial decision. The coveted yet unlikely rags-to-riches story, or "American Dream," is a unicorn that we don't even bother to incorporate into our own models of intelligence.