The uses and abuses of the ChatGPT artificial intelligence language model have taken the collective imagination by storm. Apocalyptic predictions of the singularity, when technology becomes uncontrollable and irreversible, frighten us as we imagine a future where human intelligence is irrelevant. Prof. Michael Littman joins us to contextualize the advancement of artificial intelligence and debunk the paranoid rhetoric littering the public discourse.
Michael has made groundbreaking research contributions enabling machines to learn from their experiences, assess the environment, make decisions, and improve their actions over time in real-world applications. For example, an AI model would not just understand the rules of chess but evaluate the game board, consider the uncertain outcomes of its moves, and choose the best course of action. It’s like teaching a machine to navigate a maze in the dark, where the only source of light comes from its previous actions. The machine must learn from past mistakes, gauge the current situation, anticipate possible outcomes, and act accordingly.
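For readers curious how that trial-and-error loop works in practice, the idea is the heart of reinforcement learning, the field Littman works in. Here is a minimal, purely illustrative sketch (a toy environment of our own invention, not drawn from his research) of Q-learning on a tiny one-dimensional “maze,” where the agent learns from repeated attempts which direction leads to the goal:

```python
import random

# Toy 1-D "maze": states 0..4, goal at state 4; actions: 0 = left, 1 = right.
# (Hypothetical example for illustration only.)
N_STATES, GOAL = 5, 4

def step(state, action):
    """Move left or right; reaching the goal yields reward 1, otherwise 0."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Q[s][a] estimates the long-term value of taking action a in state s.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration
random.seed(0)

for _ in range(200):                    # episodes of trial and error
    s, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit what was learned so far.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Update the estimate from this single experience: learning
        # from the past to act better in the future.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy heads right, toward the goal.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)
```

The agent starts knowing nothing; only the reward for reaching the goal, propagated backward through many episodes, teaches it which way to go, much like the maze-in-the-dark image above.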
His later work expanded into multi-agent systems, investigating how several AI entities can learn to cooperate, compete, or coexist in shared environments. Picture a team of robots in a factory, each with different tasks. The challenge here isn’t just for each robot to do its job effectively but also to collaborate with the others, avoid collisions, and adapt to changes in real time.
The emerging concept of ‘intelligence’ in artificial intelligence isn’t about building machines that can perform tasks faster and more accurately than humans; it is about building machines that can think, learn, and adapt – machines that aren’t just tools but collaborative partners.
If we examine our fears of this emerging technology, we might catch glimpses of unconscious patterns that are not unique. In childhood, we depend on our families to survive; we are shaped to accept and accommodate the systems we are raised in to maximize the possibilities of thriving. Adulthood is a process of attaining progressive influence and eventual control over our environment to favor our own instinctive and creative drives. But what if the goals of adulthood were taken out of our hands? Could we tolerate being returned to a childlike condition? Observation suggests most people fall into one of two groups: those who idealize a world where they are free of demands, and those who dread a world where they are enslaved by superiors. When we realize that neither the fantasy nor the fear of regression is the likely outcome of artificial intelligence, we are free to imagine the innumerable creative applications of the new technology and the machines that use it.
While anxieties surrounding AI are understandable, they overlook the vast potential this technology offers. In terms of job displacement, while it’s true that AI might automate specific tasks, it will also create new jobs that we can’t even conceive of today, similar to how the rise of the internet created entirely new industries. As for losing control over AI, stringent regulations, safety measures, and kill switches can be implemented to maintain human oversight. Ethical issues can be addressed by designing AI systems to be transparent and fair, using unbiased training data, and constantly auditing their decision-making processes. Privacy concerns, while valid, can be mitigated through robust data protection laws and informed consent.
Unanticipated change is often unnerving, even when it brings improvements. If we task ourselves with discovering new facts, alignment with reality will protect us from catastrophizing. We will find ourselves capable of separating from collective nonsense and calmly preparing for the inevitable changes human creativity brings forward.
~ Joseph Lee
MICHAEL L. LITTMAN, PhD
Michael L. Littman is University Professor of Computer Science at Brown University, where he studies machine learning and decision-making under uncertainty. He has earned multiple university-level awards for teaching, and his research has been recognized with three best-paper awards and three influential-paper awards. Littman is a Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery. He is currently serving as Division Director for Information and Intelligent Systems at the National Science Foundation. His book “Code to Joy: Why Everyone Should Learn a Little Programming” (MIT Press) will be released on October 3, 2023.
Information about Michael:
Order Michael’s book:
Code to Joy: Why Everyone Should Learn a Little Programming by Michael L. Littman, CLICK HERE TO ORDER
Philadelphia Association of Jungian Analysts, ADVANCED CLINICAL PRACTICE PROGRAM: A case seminar for experienced clinicians to read, explore and apply Jung’s concepts to clinical practice:
BECOME A DREAM INTERPRETER:
We’ve created DREAM SCHOOL to teach others how to work with their dreams. A vibrant community has constellated around this mission, and we think you’ll love it. Check it out.
PLEASE GIVE US A HAND:
Hey folks — We need your help. So please BECOME OUR PATRON and keep This Jungian Life podcast up and running.
SHARE YOUR DREAM WITH US:
SUBMIT YOUR DREAM HERE FOR A POSSIBLE PODCAST INTERPRETATION.
SUGGEST A FUTURE PODCAST TOPIC:
Share your suggestions HERE.
FOLLOW US ON SOCIAL MEDIA:
INTERESTED IN BECOMING A JUNGIAN ANALYST?
Enroll in the PHILADELPHIA JUNGIAN SEMINAR and start your journey to become an analyst.