FRIEND or FOE: The AI Debate with Michael L. Littman, PhD

Aug 3, 2023

VIDEO VERSION

AUDIO VERSION

The uses and abuses of the ChatGPT artificial intelligence language model have taken the collective imagination by storm. Apocalyptic predictions of the singularity, the point at which technology becomes uncontrollable and irreversible, frighten us as we imagine a future where human intelligence is irrelevant. Prof. Michael Littman joins us to contextualize the advancement of artificial intelligence and debunk the paranoid rhetoric littering the public discourse.

Michael has made groundbreaking research contributions enabling machines to learn from their experiences, assess the environment, make decisions, and improve their actions over time in real-world applications. For example, an AI model would not just understand the rules of chess, but evaluate the game board, consider uncertain outcomes of its moves, and choose the best course of action. It’s like teaching a machine to navigate a maze in the dark where the only source of light comes from its previous actions. The machine must learn from past mistakes, gauge the current situation, anticipate possible outcomes, and make decisions.

His later work expanded into multi-agent systems, investigating how several AI entities can learn to cooperate, compete, or coexist in shared environments. Picture a team of robots in a factory, each with different tasks. The challenge here isn’t just for each robot to do its job effectively but also to collaborate with the others, avoid collisions, and adapt to changes in real time.

The emerging concept of ‘intelligence’ in artificial intelligence isn’t about building machines that can perform tasks faster and more accurately than humans; it is about building machines that can think, learn, and adapt – machines that aren’t just tools but collaborative partners.

If we examine our fears of this emerging technology, we might catch glimpses of unconscious patterns that are not unique. In childhood, we depend on our families to survive; we are shaped to accept and accommodate the systems we are raised in to maximize our chances of thriving. Adulthood is a process of attaining progressive influence and eventual control over our environment in service of our own instinctive and creative drives. But what if the goals of adulthood were taken out of our hands? Could we tolerate being returned to a childlike condition? Observation suggests most people fall into one of two groups: those who fantasize about a world in which they are free of all demands, and those who fear one in which they are enslaved by superiors. When we realize that neither the fear nor the fantasy of regression is the likely outcome of artificial intelligence, we are free to imagine the innumerable creative applications of the new technology and the machines that use it.

While anxieties surrounding AI are understandable, they overlook the vast potential this technology offers. In terms of job displacement, while it’s true that AI might automate specific tasks, it will also create new jobs that we can’t even conceive of today, similar to how the rise of the internet created entirely new industries. As for losing control over AI, stringent regulations, safety measures, and kill switches can be implemented to maintain human oversight. Ethical issues can be addressed by designing AI systems to be transparent and fair, using unbiased training data, and constantly auditing their decision-making processes. Privacy concerns, while valid, can be mitigated through robust data protection laws and informed consent.

Unanticipated change is often unnerving, even when it brings improvements. If we task ourselves with discovering new facts, alignment with reality will protect us from catastrophizing. We will find ourselves capable of separating from collective nonsense and calmly preparing for the inevitable changes human creativity brings forward.

~ Joseph Lee

MICHAEL L. LITTMAN, PhD

Michael L. Littman is University Professor of Computer Science at Brown University, where he studies machine learning and decision-making under uncertainty. He has earned multiple university-level awards for teaching, and his research has been recognized with three best-paper awards and three influential paper awards. Littman is a Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery. He is currently serving as Division Director for Information and Intelligent Systems at the National Science Foundation. His book “Code to Joy: Why Everyone Should Learn a Little Programming” (MIT Press) will be released on October 3, 2023.

Information about Michael:

WEBSITE

Order Michael’s book:

Code to Joy: Why Everyone Should Learn a Little Programming by Michael L. Littman, CLICK HERE TO ORDER

Philadelphia Association of Jungian Analysts, ADVANCED CLINICAL PRACTICE PROGRAM: A case seminar for experienced clinicians to read, explore and apply Jung’s concepts to clinical practice:

CLICK HERE FOR INFORMATION

BECOME A DREAM INTERPRETER:

We’ve created DREAM SCHOOL to teach others how to work with their dreams. A vibrant community has constellated around this mission, and we think you’ll love it. Check it out.

PLEASE GIVE US A HAND:

Hey folks — We need your help. So please BECOME OUR PATRON and keep This Jungian Life podcast up and running.

SHARE YOUR DREAM WITH US:

SUBMIT YOUR DREAM HERE FOR A POSSIBLE PODCAST INTERPRETATION.

SUGGEST A FUTURE PODCAST TOPIC:

Share your suggestions HERE.

FOLLOW US ON SOCIAL MEDIA:

FACEBOOK, INSTAGRAM, LINKEDIN, TWITTER, YOUTUBE

INTERESTED IN BECOMING A JUNGIAN ANALYST?

Enroll in the PHILADELPHIA JUNGIAN SEMINAR and start your journey to become an analyst.

YES, WE HAVE MERCH!

Shop HERE

3 Comments

  1. Gwendolyn O Murphy

    At this point in the discussion, I am drawn to comment on how, in a world of neurodiversity, these ideas might resonate with “the double-empathy issue” or more generally differences among humans in information-processing. Different doesn’t mean “not-human.” Except when it does.

    Reply
  2. Mamie Allegretti

    Hello Joseph, Lisa and Deb,
    I often think of this quote by Jung which is still relevant today and probably will be forever!
    “Everything possible has been done for the outside world: science has been refined to an almost unimaginable extent, technical achievement has reached an almost uncanny degree of perfection. But what of man, who is expected to administer all these blessings in a reasonable way? He has simply been taken for granted. No one has stopped to consider that neither morally nor psychologically is he in any way adapted to such changes.” (CW 10, para. 442)
    Thanks for another great episode and all your work.

    Reply
  3. Thomas Gitz-Johansen

    I really enjoyed the talk with Prof. Michael Littman. Thanks 🙂 However, while he (probably rightly) dismisses the danger of AIs attacking humans Terminator-style or HAL 9000-style, he does not sufficiently dwell on the much more immediate danger to many, many people’s jobs and incomes. Writers, artists, musicians, translators, etc. I work at a university, and ChatGPT is making it nearly impossible to spot whether students have written their own papers or just run them through a text AI. If a paper is really good and smoothly written, it’s likely to be an AI, but you can’t really prove it (except for errors in use of citations, which the program is not so good at … yet). So, on a much more mundane level, are these programs creating problems for more people than they do any good? They probably make some tech people rich, but what are the consequences for other people? As you asked during the interview: Will the AIs put us (therapists) out of work? Luckily, since therapy has a lot to do with emotional right brain-to-right brain processes, it may be impossible for a computer to do actual therapy. But they may do something close enough to make people use it (which, again, is perhaps fine for people who cannot actually afford real therapy).

    Reply
