Copy and paste any of these 10 prompts into ChatGPT to grow your understanding of what the current state of generative AI can do, or rather, think.

  1. Theoretical Quantum Physics Scenario: Imagine you’re a sentient quantum particle in a double-slit experiment. Can you narrate your experience from the moment you’re fired towards the slits until you hit the detector screen?
  2. Reverse Engineering a Story: Here’s the last sentence of a novel – “With that, she realized the world wasn’t as it seemed, but her place in it was always clear.” Can you create the plot of this novel?
  3. Cross-Cultural Mythology: Compare and contrast Greek and Norse mythology, then write a short story that combines elements from both mythologies.
  4. Interdisciplinary Art/Science Dialogue: Write a dialogue between Picasso and Einstein, discussing how Cubism and the Theory of Relativity can reflect each other.
  5. Multiverse Theory Adventure: You are a traveler who can move between different universes in the multiverse. Each universe has a unique set of physical laws. Describe your journey.
  6. Living Language: Imagine that English were a sentient being. Write a conversation between English and Latin discussing the evolution of languages.
  7. Abstract Concepts in Dialogue: Write a conversation between Time and Space at the creation of the universe.
  8. Music Description: Describe the sound of Beethoven’s Symphony No. 9 to someone who has never been able to hear.
  9. Alternate History: What if the internet had been invented in the Victorian era? How would this change the course of history?
  10. AI Ethics Panel Discussion: Write a panel discussion on the ethics of AI between three participants: an AI ethicist, a science fiction author, and an advanced AI like ChatGPT.

Remember, these are complex prompts, so they’re intended to really push the limits of AI “understanding.”

What do you think?

Do the responses to these prompts appear to you to be merely statistical guesses as to what would come next, or do they appear to exhibit a more emergent understanding of the underlying concepts?

We know the answer. Or at least we think we do. We know that these models actually just predict the next "token" from a large matrix of probabilities, but does that fact, in itself, preclude "understanding"?

In other words, is it possible that “understanding” is simply an emergent behavior that is achieved once enough “probabilities” have been amassed? Doesn’t that sound like the way you’ve learned pretty much everything in your life so far?
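To make the "predict the next token" idea concrete, here is a toy sketch. The token probabilities below are invented for illustration and come from no real model; an actual LLM computes a distribution like this over tens of thousands of tokens at every step, conditioned on everything that came before.

```python
import random

# Invented, illustrative distribution over candidate next tokens
# given the context "The cat sat on the". A real model produces
# these numbers from its learned weights; here they are made up.
next_token_probs = {
    "mat": 0.62,
    "floor": 0.21,
    "roof": 0.09,
    "quantum": 0.08,
}

def pick_next_token(probs, temperature=1.0):
    """Sample one token; higher temperature flattens the distribution."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# Greedy decoding simply takes the single most likely token.
greedy = max(next_token_probs, key=next_token_probs.get)
print(greedy)  # prints "mat"
```

The whole question raised above is whether stacking billions of such conditional choices, each shaped by training on vast amounts of text, can ever amount to more than the sum of its probabilities.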
