About this Book
Unconscious habits and decision-making shortcuts play a crucial role in shaping human behavior, with technology and AI increasingly influencing our choices. Psychologists like Kahneman and Tversky identified critical factors, such as representativeness, availability, and anchoring, that affect how decisions are made. As AI systems become more integrated into daily life, concerns about their transparency and ethical implications grow. In his exploration, Jacob Ward raises significant questions about the potential dominance of profit-driven technology over personal agency. Striking a balance between instinctive reactions and thoughtful decision-making is vital for maintaining a healthier relationship with these emerging technologies and ensuring they serve humanity rather than control it.
2022 · Self-Help · Computer Science · 09:19 min
Conclusion
Unconscious habits shape decisions, often influenced by technology. Awareness of biases and the importance of critical thinking can help individuals regain control over their choices, countering manipulative practices in business and the complexities of AI in daily life.
Key Points
"Unconscious habits" shape experiences.
After World War I, many injured men were treated in Austrian medical clinics. They perceived the world in strange ways, and some noticed things most people ignore.
Austrian doctor Otto Pötzl studied these patients. In 1917, he wrote about one of them, explaining how the brain receives and uses information even when we are unaware of it. Later research showed that the brain constructs our reality from a flood of information.
The brain isn't a "closed system." It can unconsciously rebuild perceptions from all the senses and absorb and convey emotions without us realizing it. Scientists are finding out about the unconscious habits and tendencies that influence how we act and decide. While this field of science is still new, people in politics and business are already using these patterns to control behavior.
"Three key rules shape decisions, impacting technology's control over lives."
Between 1971 and 1979, psychologists Daniel Kahneman and Amos Tversky wrote papers that were very important for the field of "behavioral guidance." This field tries to influence how people make decisions. In a 1974 essay, they talked about how people make decisions without realizing it and mentioned three things that affect decision-making: "representativeness," "availability," and "anchoring."
Representativeness means linking certain traits to categories, which can lead to misjudgments, such as assuming that anyone who fits the stereotype of a doctor must be one. Availability means that easily remembered events are judged more likely: a vividly recalled crime feels probable, but remembering it doesn't make a repeat more likely. Anchoring is when an initial figure skews later estimates, regardless of subsequent evidence.
The Discovery and Impact of Hindsight Bias and Human Irrationality on Decision-Making
Other shortcuts exist as well. Baruch Fischhoff, a student working with Kahneman and Tversky, discovered "hindsight bias": people believe that events which happened were more likely than they actually were. Paul Slovic, a close collaborator, founded Decision Research, which studied how people judge the risk of bad outcomes. Slovic found that feelings, alongside heuristics like representativeness and availability, shape how people decide.
This research helped find the systematic "patterns of human irrationality" that affect decision-making. People often make irrational decisions, and there are technologies being designed to take advantage of these weaknesses that people find hard to resist. This is part of what Ward calls the first loop.
"Guidance systems" control seemingly free human behavior.
Robots struggle with simple human tasks. At a competition sponsored by DARPA, for example, robots failed to climb a ladder or complete complex missions. The problem is that they need enormous amounts of data and must process it consciously, step by step, with human help. Humans don't work that way.
In 2000, psychologists Keith Stanovich and Richard West proposed that human minds work with two systems, "System 1" and "System 2." System 1 makes quick, unconscious decisions, while System 2 handles more thoughtful, analytical decisions. Most judgments rely on System 1, but System 2 oversees its functioning. Failures in rational decision-making often involve both systems, with System 1 causing the failure and System 2 failing to notice it.
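As a loose computational analogy (not from the book), the two-system division can be sketched as a fast lookup that usually answers, with a slower deliberate check that only occasionally intervenes:

```python
# A rough analogy for the two-system model: "System 1" returns a fast,
# pattern-based guess; "System 2" is a slower, analytical check that is
# rarely engaged, so heuristic errors often go uncorrected.
# (Hypothetical sketch; all names and examples here are illustrative.)

def system1(question):
    """Fast, intuitive guess: answer from a lookup of familiar cases."""
    familiar = {
        "2 + 2": 4,
        "capital of France": "Paris",
        "bat and ball, ball costs": 0.10,  # the famous wrong intuition
    }
    return familiar.get(question)

def system2(question):
    """Slow, deliberate reasoning.

    A bat and ball cost $1.10 total, and the bat costs $1.00 more than
    the ball, so the ball costs $0.05, not the intuitive $0.10.
    """
    if question == "bat and ball, ball costs":
        return 0.05
    return system1(question)

def decide(question, engage_system2=False):
    """Most judgments take the fast path; System 2 rarely intervenes."""
    return system2(question) if engage_system2 else system1(question)

print(decide("bat and ball, ball costs"))                       # 0.1
print(decide("bat and ball, ball costs", engage_system2=True))  # 0.05
```

The point of the analogy: because the slow path is rarely engaged, the fast path's confident errors usually stand, which matches the failure pattern described above.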
People's brains can make them believe they have free will, but their actions are often guided by unconscious forces or "guidance systems." Even though people's choices are not as free as they think, they still judge and criticize others whose behavior is influenced by uncontrollable factors like poverty or addiction. Humans have created technologies and businesses that take advantage of these weaknesses, promoting activities like substance abuse, where people find it hard to assess the costs and benefits.
Businesses manipulate behavior with tech yet remain trusted.
People don't like uncertainty and prefer reassurance. They often ignore lessons from the past because their emotions control their decisions. This natural tendency can be helpful from an evolutionary standpoint.
Entrepreneurs know they can use these natural tendencies in business. Companies are always trying to find better ways to influence how people behave. They use digital technologies to understand and copy unconscious patterns for their customers. While these methods might seem manipulative, many people trust and believe in them.
Facebook and Google, for example, use extremely powerful computers and sophisticated "algorithms." When systems are this complex, the people who work with them find it hard to think critically about them. The error-correcting part of our brain is easily thrown off, and people have long believed what machines tell them without good reason. Ward sees this as the central problem of the second loop.
People misunderstand AI's nature and power.
AI, invented in the mid-1950s, marked a fundamental change in how technology affects people's lives.
AI isn't about making a robot brain that copies the human mind; it refers to any system that learns from data to do a job. AI learns in three main ways. In "machine learning," the system guesses what will happen based on patterns in existing data. In "supervised learning," it finds patterns in correctly labeled data to predict what will happen next. In "reinforcement learning," it discards wrong answers, keeps right ones, and spots the patterns among what worked.
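To make the supervised-learning idea concrete, here is a minimal, hypothetical sketch (the book contains no code): a system is shown correctly labeled examples and extracts a pattern it can apply to new data.

```python
# A minimal illustration of "supervised learning": the system is given
# correctly labeled examples and learns a pattern it can reuse on unseen
# data. (Hypothetical sketch; the function names are illustrative.)

def train_threshold(examples):
    """Learn a single cutoff from labeled (value, label) pairs.

    Labels are "low" or "high"; the learned pattern is the midpoint
    between the largest "low" value and the smallest "high" value.
    """
    lows = [v for v, label in examples if label == "low"]
    highs = [v for v, label in examples if label == "high"]
    return (max(lows) + min(highs)) / 2

def predict(threshold, value):
    """Apply the learned pattern to new, unlabeled data."""
    return "high" if value >= threshold else "low"

# Correctly labeled training data: this labeling is the "supervision."
data = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]
cutoff = train_threshold(data)  # midpoint of 2.0 and 8.0, i.e. 5.0
print(predict(cutoff, 3.0))  # low
print(predict(cutoff, 7.5))  # high
```

Real systems learn far richer patterns from millions of examples, but the shape is the same: labeled data in, reusable pattern out.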
AI can worry us because we often trust whatever it tells us. Our critical thinking mind, System 2, tends to rely on our emotions and unconscious patterns, known as System 1, rather than making decisions on its own.
The Importance of Transparency and Explainability in AI Systems
AI systems need an "objective function," which is the task humans want them to do. How well these systems follow their objective function is important. Even though AI systems give answers we tend to believe, they don't show us how they got these answers, making them mysterious to most people.
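A toy illustration of an objective function, assuming a simple curve-fitting task (hypothetical, not from the book): the objective is a number the system tries to minimize, and the system reports only its answer, not how it got there.

```python
# Sketch of an "objective function": the numeric goal a system optimizes.
# Illustrative task: pick the slope of a line that best fits some points.

def objective(slope, points):
    """Sum of squared errors between predictions and observed values.

    This single number is the system's entire notion of "doing well."
    """
    return sum((slope * x - y) ** 2 for x, y in points)

def fit_slope(points, candidates):
    """Return the candidate slope that best satisfies the objective."""
    return min(candidates, key=lambda s: objective(s, points))

points = [(1, 2.1), (2, 3.9), (3, 6.0)]
best = fit_slope(points, [s / 10 for s in range(0, 51)])
print(best)  # 2.0
```

The caller sees only the chosen slope; the search that produced it stays hidden, which is the opacity the summary describes, scaled down to a trivial example.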
Machine learning systems operate without transparency, which is fine for simple tasks but raises ethical concerns as AI tackles complex human tasks like selecting the best person for a job. Some suggest more transparency or "explainability" in AI, but this poses significant technical challenges.
Tech dominance may lead to forgotten preferences and communication skills.
"Pattern-recognition technology" has the power to influence human life globally and at every level. Algorithms, for example, can determine your food, drink, clothing, and even entertainment choices. As these algorithms start interacting, they could shape an entire human life. This convergence of algorithms may restrict human freedom and amplify people's unconscious urges.
When AI becomes a big part of your life, it tends to affect everything else too. Take the COVID-19 pandemic, for example. At the start, health officials talked about a COVID-19 "passport" for people with a negative test or vaccine. Google and Facebook even planned an app using Bluetooth to tell people if they'd been near others with the virus.
Then, a company used AI to make drones that could spot people with COVID-19 symptoms from above. These systems could collect more than just health data, like heart rate and skin tone. And they could be used for things other than health, too. For example, a police department wanted to work with a company to use AI for surveillance in a town near New York City. The people there had no idea they were being watched. And there are no rules limiting what the police or others can do with this data.