
We are PhD students from Harvard University here to answer questions about artificial intelligence and cognition. Ask us anything!

Jul 29th 2017 by SITNHarvard • 33 Questions • 8011 Points

EDIT 3:

Thank you everyone for making this so exciting! I think we are going to call it a day here. Thanks again!!

EDIT 2:

Thanks everyone for the discussion! Keep it going! We will try to respond to more questions as they trickle in. A few resources for anyone interested:

Coding:

Introduction to Programming with Codecademy.

A more advanced course on the Python programming language (one of the most popular coding languages).

Intro to Computer Science (CS50)

Machine learning:

Introduction to Probability (Stat110)

Introduction to Machine Learning

Kaggle Competitions - Not sure where to start with data to predict? Would you like to compete with others on your machine learning chops? Kaggle is the place to go!

Machine Learning: A Probabilistic Perspective - One of the best textbooks on machine learning.

Code Libraries:

Sklearn - Really great machine learning algorithms that work right out of the box (see the small example after this list)

Tensorflow (with Tutorials) - Advanced machine learning toolkit so you can build your own algorithms.
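
(Not part of the original list, but to show what "right out of the box" looks like: a minimal scikit-learn sketch that trains a classifier on the library's built-in iris dataset and reports held-out accuracy.)

    # Minimal scikit-learn example: learn from labeled data, then evaluate.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)                          # features and labels
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)                                   # train on labeled examples
    print("test accuracy:", clf.score(X_test, y_test))          # evaluate on held-out data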

Hello Redditors! We are Harvard PhD students studying artificial intelligence (AI) and cognition, representing Science in the News (SITN), a Harvard graduate student organization committed to scientific outreach. SITN posts articles on its blog, hosts seminars, creates podcasts, and holds meet-and-greets between scientists and the public.

Things we are interested in:

AI in general: In what ways does artificial intelligence relate to human cognition? What are the future applications of AI in our daily lives? How will AI change how we do science? What types of things can AI predict? Will AI ever outpace human intelligence?

Graduate school and science communication: As a science outreach organization, how can we effectively engage the public in science? What is graduate school like? What is graduate school culture like, and what was the road to getting here?

Participants include:

Rockwell Anyoha is a graduate student in the department of molecular biology with a background in physics and genetics. He has published work on genetic regulation but is currently using machine learning to model animal behavior.

Dana Boebinger is a PhD candidate in the Harvard-MIT program in Speech and Hearing Bioscience and Technology. She uses fMRI to understand the neural mechanisms that underlie human perception of complex sounds, like speech and music. She is currently working with both Josh McDermott and Nancy Kanwisher in the Department of Brain and Cognitive Sciences at MIT.

Adam Riesselman is a PhD candidate in Debora Marks’ lab at Harvard Medical School. He is using machine learning to understand the effects of mutations by modeling genomes from plants, animals, and microbes from the wild.

Kevin Sitek is a PhD candidate in the Harvard Program in Speech and Hearing Bioscience and Technology working with Satra Ghosh and John Gabrieli. He’s interested in how the brain processes sounds, particularly the sound of our own voices while we're speaking. How do we use expectations about what our voice will sound like, as well as feedback of what our voice actually sounds like, to plan what to say next and how to say it?

William Yuan is a graduate student in Prof. Isaac Kohane's lab at Harvard Medical School working on developing image recognition models for pathology.

We will be here from 1-3 pm EST to answer questions!

Proof: Website, Twitter, Facebook

EDIT:

Proof 2: Us by the Harvard Mark I!

Q:

Should we genuinely be concerned about the rate of progression of artificial intelligence and automation?

A:

We should be prepared to live in a world filled with AI and automation. Many jobs will become obsolete in the not so distant future. Since we know this is coming, society needs to prepare policies that will make sense in the new era.

-Rockwell (opinion)


Q:

In what areas or sectors do you see AI taking a serious foothold in first (Medical, accounting, etc) and why?

A:

Medical image processing has already taken a huge foothold and shown real promise for helping doctors treat patients. For example, a machine has matched human doctors' performance in identifying skin cancer from pictures alone!

The finance and banking sector is also ripe for automation. Usually humans pick which stocks are good and bad and buy the ones they think will perform best. This is a complicated decision process ultimately driven by statistics gathered about each company. Instead of a human reviewing and buying these stocks, algorithms now do it automatically.

We still don't know how this will impact our economy and jobs--only time will tell.


Q:

Machine learning is currently a hot topic right now. What do you all think will be the next big thing in AI?

A:

Adam here:

From a pure machine learning standpoint, I think unsupervised learning is going to be the next big thing in machine learning. Right now, researchers feed a machine both the data (say, an image of a cat) and a label (that it is a cat). This is called supervised learning. Much of the progress in AI is in this area, and we have seen a ton of great successes with it.

How do we get machines to teach themselves? This is an area called unsupervised learning. When a baby is born, its parents don't have to teach it every single thing about the world--it can learn for itself. This is kind of tricky: how do you tell a computer what to pay attention to and what to ignore? This is not easy, but folks in the AI field are working on it. (For further reading/listening, Yann LeCun has a great talk about this.)
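
(To make the supervised/unsupervised distinction concrete, here is a small illustrative sketch using scikit-learn; it is an addition for readers, not part of the original answer. The supervised model sees data and labels; the unsupervised one sees only the data and must find structure on its own.)

    # Supervised vs. unsupervised learning on the built-in handwritten digits data.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X, y = load_digits(return_X_y=True)            # images of digits plus their labels

    # Supervised: learn the mapping from image to digit label.
    clf = LogisticRegression(max_iter=2000).fit(X, y)

    # Unsupervised: no labels at all; just group similar images together.
    groups = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

    print(clf.predict(X[:5]))   # predicted labels for the first five images
    print(groups[:5])           # cluster assignments for the same images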


Q:

As someone currently writing a Ph.D. research proposal and constantly finding myself frustrated with conflicting results in publications with nearly identical experiments, I would love to see an AI capable of parsing through hundreds of research papers, being able to comprehend the experiments and methods outlined (likely the hardest part), then compiling all the results (both visual and text-based) into a database that shows where these experiments differ, which results are the most consistently agreed upon, and which discrepancies seem to best explain the differences in results.

I can't help but feel that once the database is created a simple machine learning algorithm would be able to identify which variables best predict which results and be able to find extremely compelling effects that a human may never notice. My biggest problem is trying to make connections between a paper I read 300 pages back (or even remember the paper for that matter) and the one I am reading now.

With the hundreds of thousands of papers relevant to any particular field, it would be impossible for any researcher to actually read and retain even a small fraction of the relevant research in their field. Every day I think about all the data already out there ready to be mined and analyzed, and the massive discoveries that have already been made but not realized due to the limitations of the human brain.

Are there any breakthroughs on the horizon for an AI that can comprehend written material with such depth and be able to organize it in a way that can be analyzed by simple predictive modeling?

A:

Adam here:

That's a great idea! And pretty daunting. In the experimental/biological sphere, I have seen a service that scans the literature to find which antibodies bind to which protein. I think this is a much more focused application that seems to work pretty decently.


Q:

This is kind of tricky: how do you tell a computer what to pay attention to and what to ignore? This is not easy, but folks in the AI field are working on it.

I think you may be massively understating this. As you undoubtedly know yourself, this is called the 'frame problem', and a.i. research has been working on this problem for almost 50 years now without any progress. So it's misleading to say 'we are currently working on it' as if this is a new focus or recent development in research.

Do you have any opinions on Heideggerian A.I.?

A:

Adam here:

Thanks for your response. I guess I was referring to the specific algorithmic framework for unsupervised learning--simply finding P(X) [i.e., a complicated, nonlinear probability distribution of your data]. Generative models are used for this; they are useful because they give you a way to somehow probe at the underlying (latent) variables in your data and allow you to generate new examples of data.

This was previously tackled with the Wake-Sleep algorithm, without much success, and then with Restricted Boltzmann Machines and Deep Belief Networks, but those have been really challenging to get working on real-world data.

Recently, models like Variational Autoencoders and Generative Adversarial Networks have broken through as some of the simplest yet most powerful generative models. These allow you to quickly and easily perform complicated tasks on unstructured data, including creating endless drawings of human sketches, generating sentences, and automatically colorizing pictures.

So yes, I agree, folks are working on this, and have been for a long time. With these new techniques, I think we are approaching a new frontier in getting machines to understand our world all on their own.

edit: typo
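
(As a toy illustration of "finding P(X)" and then generating new examples -- a much simpler model than the VAEs and GANs named above, added here only for readers -- one can fit a Gaussian mixture to unlabeled data and sample from it:)

    # Toy generative model: fit P(X) to unlabeled data, then draw new samples from it.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Pretend this is real, unlabeled data with hidden structure (two clusters).
    X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(5, 1, (500, 2))])

    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)   # learn P(X)
    new_points, _ = gmm.sample(5)                                   # generate new "data"
    print(new_points)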


Q:

Do you think there are any specific laws Governments should be putting in place now, ahead of the AI advancements?

A:

The three laws of robotics suggested by Isaac Asimov:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Seriously speaking, there should probably be some laws regulating the application of AI, and maybe some organization that evaluates code when AI will be used in morally and ethically fraught situations. The problem that comes to mind is a driverless vehicle choosing between continuing its course to save 1 person or deliberately swerving to save 10 people. I'm not an expert though.

Rockwell


Q:

Hi,

I'm a second year interested in AI and Machine Learning. I was hoping that in the future opportunities related to AI safety would open up. Do you have any tips on things I should do, or courses I should take in this general direction? Thanks!

A:

Adam here:

The folks at Google wrote a pretty interesting article about what are the safety concerns with AI in the near future. They had five main points:

1) “Avoiding negative side effects” - If we release an AI out into the wild (like a cleaning or delivery robot), how will we be sure it won’t start attacking people? How do we not let it do that in the first place?

2) “Avoiding reward hacking” - How do we ensure a machine won’t “cheat the system”? There is a nice thought experiment/story about a robot that is to make all the paperclips!

3) “Scalable oversight” - How can we ensure a robot will perform the task we want it to do without “wasting its time”? How do we tell it to prioritize?

4) “Safe exploration” - How do we teach a robot to explore its world? If it is wandering around, how does it not fall into a puddle and short out? (Like this poor fellow)

5) “Robustness to distributional shift” - How do we get a machine to work all the time in every condition that it could ever encounter? If a floor cleaning robot has only ever seen hardwood floors, what will it do when it sees carpet?

For courses, this is a very uncharted area! I don’t think our understanding of machine learning is far enough along that we have truly run into these problems yet, but it is coming! I would advise becoming familiar with algorithms and how these machines work!

Edit: Forgot a number, and it was bugging me.


Q:

How far are we from actual, realistic sex-bots?

A:

Depends - how good do you want the sex to be?


Q:

Recently the Facebook engineers turned off a machine-learning program that they were using to translate, which has been reported as having organically created its own language. Is this anywhere near as interesting as it seems on the surface? Why or why not?

A:

Adam here:

I scoured the internet and think I found the original article about this. In short, I would say this is nothing to be afraid of!

A big question in machine learning is: how do you get responses that look like something a human produced, or that you would see in the real world? (Say you want a chatbot that speaks English.) Suppose you have a machine that can spit out examples of sentences or pictures. One way to train it would be to have the machine generate a sentence as a human would, and then have a person tell it whether it did a good or bad job. But that is slow and takes a lot of time. Since these are learning algorithms that “teach themselves”, they need millions of examples to work correctly, and grading a machine millions of times is out of reach for humans.

Another way to do this is to have two machines doing two different jobs: one produces sentences (the generator), and the other tells it whether the sentences look like the target language (the discriminator).

From what I can understand from the article, they had the machine that was spitting out language working, but the machine that said “Does this look like English or not?” was not working. Since their end goal was to have a machine that spoke English, it was definitely not working, so they shut it down. The machines that were producing language did not understand what they were saying, so I would almost classify what they were doing as garbage.

For further reading, these things are called Generative Adversarial Networks, and can do some pretty cool stuff, like dream up amazing pictures that look almost real! Original paper here.
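
(For readers who want to see the generator/discriminator idea in code, here is a bare-bones sketch in PyTorch on toy 1-D data -- purely an illustration of the training loop, not the system Facebook used or a state-of-the-art GAN.)

    # Minimal GAN sketch: a generator learns to imitate samples from N(4, 1.5).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def real_batch(n):
        return torch.randn(n, 1) * 1.5 + 4.0          # the "real" data distribution

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                  # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())    # discriminator

    opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        # Discriminator: label real samples 1 and generated samples 0.
        real = real_batch(64)
        fake = G(torch.randn(64, 8)).detach()
        loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()

        # Generator: try to make the discriminator call its samples real.
        fake = G(torch.randn(64, 8))
        loss_G = bce(D(fake), torch.ones(64, 1))
        opt_G.zero_grad(); loss_G.backward(); opt_G.step()

    print(G(torch.randn(1000, 8)).mean().item())       # should drift toward ~4.0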


Q:

The article has caused quite the outrage among the AI community. The click bait plays into public fear sparked by comments from Elon Musk, Hawking, etc.

Tl;dr nowhere as interesting as the article makes it out to be

A:

Dana here: Great points, Kevin.

Great question! For those who may not know, the Chinese Room argument is a famous thought experiment by philosopher John Searle. It holds that computer programs cannot "understand," regardless of how human-like they might behave.

The idea is that a person sits alone in a room and is passed inputs written in Chinese symbols. Although the person doesn't understand Chinese, they follow a program for manipulating the symbols and can produce grammatically correct outputs in Chinese. The argument is that AI programs only use syntactic rules to manipulate symbols, but do not have any understanding of the semantics (or meaning) of those symbols.

Searle also argues that this refutes the idea of "Strong AI," which states that a computer that is able to take inputs and produce human-like outputs can be said to have a mind exactly the same as a human's.


Q:

What is it like being graduate students at Harvard? Such a prestigious school, do you feel like you have to exceed expectations with your research?

A:

My therapist told me not to discuss this issue. - Dana


Q:

As someone with a coding background but no ML background, what libraries or algorithms would you recommend looking into to become more educated in ML? Tensorflow? Looking into neural networks?

A:

Kevin here: On the cognitive science side, I'm seeing lots of people get into tensorflow as a powerful deep learning tool. For more general or instance-by-instance application of machine learning, scikit-learn gets a ton of use in scientific research. It's also been adapted/built on in my specific field, neuroimaging, in the tool nilearn.


Q:

When any of you meet someone new and explain what you do/study, do they always ask singularity related questions?

What material would you point a computer science student towards if they were interested in learning more about AI?

A:

Thanks for the question! We put some resources at the top of the page for more info on getting into machine learning. It is a pretty diverse field and it is changing very rapidly, so it can be hard to stay on top of it all!


Q:

Do you think advancements in AI / machine learning will follow Moore's law and exponentially improve?

If not, in your opinion, what needs to happen for there to be exponential improvements?

A:

AI actually hasn't improved that much since the 80s. There is just a lot more data available for machines to learn from. Computers are also much faster so they can learn at reasonable rates (Moore's law caught up). I think understanding the brain will help us improve AI a lot.

-Rockwell


Q:

Will it be possible for machines to feel? If so, how will we know and measure such a phenomenon?

A:

By feel I'm assuming you're referring to emotion. It'd be controversial to say that we could even measure human emotion. If you're interested in that stuff, Cynthia Breazeal at MIT does fantastic work in this area. She created Kismet, a robot that could sense and mimic human emotions (or, more accurately, facial expressions).

http://www.ai.mit.edu/projects/humanoid-robotics-group/kismet/kismet.html

-Rockwell


Q:

Wow. Interesting. I wonder if AI would invent their own equivalent of emotion that didn't appear to mimic any human traits.

A:

Kevin here: I think an issue is what the purpose would be. Given our brain's "architecture," emotion (arguably) serves the function of providing feedback for evolutionarily beneficial behaviors. Scared? Run away. Feel sad? Try not to do the thing that made you feel sad again. Feel content? Maybe what you just did is good for you. (although recent human success & decadence might be pushing us into "too much of a good thing" territory...)

What function would emotion serve in an AI bot? Does it need to feel the emotion itself? Or is it sufficient for it to recognize emotion in its human interlocutors and to respond appropriately in a way that maximizes its likelihood of a successful interaction?


Q:

Who's paying your tuition, car insurance, everyday food money, etc. Who's funding your life?

A:

Harvard


Q:

Will we ever be able to merge our own intelligence with machines? Can they help us out in how we think, or will they be our enemies, like everyone says?

A:

William here: This is currently happening! A couple examples: chess players of all levels make extensive use of chess machines/computers to aid their own training/preparation, AI platforms like Watson have been deployed all over the healthcare sector, predictive models in sports have also been taking off recently. Generally speaking, we make extensive use of AI techniques for prediction and simulation in all sorts of fields.


Q:

Has anyone used machine learning to create viruses? What's stopping someone from making an AI virus that runs rampant through the internet? Could we stop it if it became smart enough?

Or is that all just scary science fiction?

A:

People use machine learning to create viruses all the time. There has always been a computational arms race between viruses and antivirus software. People who work in computer security don't mess around, though. They get paid big bucks to do their job, and the field has some of the smartest people around.

Crazy people will always do crazy things. I wouldn't lose sleep over this. Security is always being beefed up and if it's breached we'll deal with it then.

Rockwell


Q:

Two questions:

1) This is probably mostly for Dana. My understanding of fMRI is limited, but from what I understand the relationship between blood-oxygen levels and synaptic activity is not direct. In what way does our current ability in brain scanning limit our true understanding of the relationship between neuronal activity and perception? Even with infinite spatial and temporal resolution, how far would we be from completely decoding a state of brain activity into a particular collection of perceptions/memories/knowledge/etc.?

2) Have any of you read Superintelligence by Nick Bostrom? If so, I'd love to hear your general thoughts. What do you make of his warnings of a very sudden general AI take-off? Also, do you see the concept of whole brain emulation as an eventual inevitability, as is implied in his book, with the increases in processing power and our understanding of the human brain?

Edit: grammar

A:

Dana here: So, fMRI infers neural activity by taking advantage of the fact that oxygenated blood and deoxygenated blood have different magnetic properties. The general premise is that you use a network of specific brain regions to perform a task, and active brain regions take up oxygen from the blood. Then to get more oxygen, our bodies send more blood to those parts of the brain to overcompensate. It's this massive overcompensation that we can measure in fMRI, and use to determine which brain regions are actively working to complete the task. So this measure is indeed indirect - we're measuring blood flow yoked to neural activity, and not neural activity itself.

But although the BOLD signal is indirect, we are still able to learn a lot about the information present in BOLD activity. We can use machine learning classification techniques to look at the pattern of responses across multiple voxels (3D pixels in the fMRI image) and decode information about the presented stimuli. Recently, neuroscientists have also started using encoding models to predict neural activity from the characteristics of a stimulus, and thus describe the information about a stimulus that is represented in the activity of specific voxels.

However, this is all operating at the level of a voxel - and a single voxel contains tens of thousands of neurons!
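
(To give a flavor of the decoding approach described above, here is a hedged sketch on synthetic "voxel" data; real analyses use preprocessed fMRI data and tools like nilearn, but the classification step looks roughly like this.)

    # Toy "decoding": classify which stimulus was presented from multi-voxel patterns.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 200, 500
    stimulus = rng.integers(0, 2, n_trials)             # 0 = speech, 1 = music (made-up labels)

    # Synthetic voxel responses: noise plus a weak stimulus-dependent pattern.
    pattern = rng.normal(0, 1, n_voxels)
    X = rng.normal(0, 1, (n_trials, n_voxels)) + 0.3 * np.outer(stimulus, pattern)

    # Cross-validated decoding accuracy; ~0.5 would mean no decodable information.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, stimulus, cv=5)
    print("decoding accuracy:", scores.mean())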


Q:

Interesting, thanks for the response! A few followup questions. If the encoding models operate at the voxel level, how does that limit the mapping between stimuli and neural activity? If each voxel is tens of thousands of neurons, is there fidelity that is being lost in the encoding models? And does perfect fidelity, say 1 voxel representing 1 neuron, give a substantial gain in prediction models? Do you know what mysteries that might uncover for neuroscientists or capabilities it might give to biotech? (I assume 1 voxel to 1 neuron is the ideal or is there better?)

Is there a timeline for when we might reach an ideal fMRI fidelity?

A:

We're definitely losing fidelity in our models due to large voxel sizes. We're basically smearing neural "activity" (insofar as that's what we're actually recording with fMRI, which as we've discussed isn't totally true) over tens of thousands of voxels. So our models will only be accurate if the patterns of activity that we're interested in actually operate on scales larger than the voxel size (1-3 mm³). Based on successful prediction of diagnoses from fMRI activity (which I wrote about previously for Science in the News), this is almost certainly true for some behaviors/disorders. But getting to single-neuron level recordings will be super helpful for predicting/classifying more complex behaviors and disorders.

For instance, this might be surprising, but neuroscientists still aren't really sure what the motor cortex activity actually represents and what the signals it sends off are (for instance, "Motor Cortex Is Required for Learning but Not for Executing a Motor Skill"). If we could record from every motor cortical neuron every millisecond during a complex motor activity with lots of sensory feedback and higher-level cognitive/emotional implications, a predictive model would discover so much about what's being represented and signaled and when.

For fMRI, we're down below 1 mm resolution in high magnetic field (7T+) scanners. There's definitely reason to go even smaller - it'll be super interesting and important for the rest of the field to see how the BOLD (fMRI) response will vary across hundreds or tens or single neurons. Maybe in the next 10ish years we'll be able to get to ~0.5 mm or lower, especially if we can develop some even-higher-field scanners. But a problem will be in dealing with all the noise--thermal noise from the scanner, physiological noise from normal breathing and blood pulsing, participant motion.... Those are going to get even hairier at small resolutions.


Q:

Neat, it's going to be exciting hearing about the improvements in prediction models over this century. Thanks for the link to your article, I haven't read it in full yet, but improvements in diagnosing mental illness through brain imaging over standard assessments sounds incredible.

Doubling the fMRI resolution in 10 years (is that about the typical timeframe?). It sounds like filtering algorithms have to improve to keep up with resolution improvements to combat that worsening SNR. It all sounds very challenging or hairy as you say.

Edit: wording

A:

As far as fMRI goes, I think Kevin's answer (below) gets to the point. We are measuring a signal that is blurred in time and space, so at some point increased resolution doesn't help us at all - and even lowers our signal-to-noise ratio!


Q:

Do you think A.I. will become sentient, and if so, how long will it take? -Wayne from New Jersey

A:

Can you convince me right now that YOU are sentient?

Rockwell

(To answer your question, my personal metric for robot sentience is self-deprecating humor as well as observational comedy by the same robot in one comedy special.)


Q:

When will there be AI to replace our congressmen and other (you know who!) politicians? And can we do anything to speed up the process?

A:

Politics, ethics, and the humanities and liberal arts in general will be the hardest things for AI to replace.

Rockwell


Q:

Have you ever had sex with a robot? Would you want to?

A:

No.


Q:

I studied industrial design and I'm very interested in AI and machine learning. What would be your suggestions on how to begin to learn, utilize, and get involved in AI and machine learning without having a background in programming/computer science/software engineering?

Learning a programming language is a start (I'm starting to learn some Python), but I don't really know a path beyond that.

A:

Thanks for the question! We put some links at the top of the page for more information! Keep on going!


Q:

In movies they can construct a convincing video of actors who aren't even living anymore (Star Wars). Should we be worried about artificial intelligence being able to invalidate any sort of photo/video evidence of real crimes at some point by making perfectly accurate simulations of scenarios indistinguishable from real evidence?

A:

When I think of AI in diagnostic medicine I actually don't think of nanobots (I don't know much about nanobots myself). I think of a machine that has learned a lot about different people (e.g. their genomes, epigenomes, age, weight, height, etc.) and their health and uses that information to diagnose new patients. This is the basic idea behind personalized medicine and it's making great progress. You can imagine a world where we draw blood and, based on sequencing and other diagnostics, the machine will say "you have this disease and this is the solution for you". It happens a bit already.

Rockwell


Q:

What's the best route academically to get involved with AI in the future?

I'm in community college still and I'm going for an AA in computer science, but I'm also interested in getting an AA in psychology due to the concept of working within the neuroscience/AI field in the future.

A:

Adam here:

Honestly, I think having a strong mathematical background is really important for being "good" at machine learning. A friend once told me that machine learning is called "cowboy statistics": machine learning is essentially statistics, but with fancy algorithms. (I think it is also called this because the field is so new and rapidly evolving, like the Wild West.) I think machine learning gets hyped up too much, while basic statistics can often get you pretty far.

I would also advocate pursuing the field you are passionate about--neuroscience and psychology sound great! It doesn't do much good to model data if you don't know what it means. Most of us here have a specific problem that we find interesting and apply machine learning methods to it. (Others work in the pure machine learning field; that is always an option too.)

tl;dr: Math and your field of interest.


Q:

Hello! Deeply fascinated with AI, thanks for doing an AMA.

What is your take on the recent development of deep learning structures developing their own languages without human input?

A:

Adam here:

Thanks for asking! I think I answered the question here. Hopefully that clears it up a bit!


Q:

How long do you think it will take to make AI like Jarvis or Friday from the Avengers/Spider-Man movies?

A:

Adam here:

I think we are getting rather close to personal assistants we can chat with that will do our [menial] bidding. Amazon is currently holding a competition for creating a bot you can converse with. And when there is money behind something, it usually happens.

Moreover, there are already a few digital personal assistants out there you can purchase (Amazon Echo, Google Home, Siri). (They can all talk to each other too!) Soon enough these will be integrated with calendars, shopping results (where they can go fantastically wrong), and even more complicated decision-making processes.


Q:

Does the AI have the capability to choose to do, or not do, something based on its own observation? Or only if it's coded into the AI to make those choices? In other words, does the AI have the freedom to choose, or are its choices already determined by algorithms?

A:

Here is the basic gist of how most AI "learns".

First you choose a task that you want your AI to perform. Let's say you want to create AI that judges court cases and gives a reason for its decisions.

Second, you train your AI by giving it examples of past court cases and the resulting judgements. During this process, the AI uses all the examples to develop a logic that's consistent across them.

Third, the AI applies this logic to novel court cases. The scariest part about AI is that in most cases we don't really understand the logic that the computer develops; it just tends to work. The success of the AI depends heavily on how it was trained. Many times it will give a decision that is obvious and we can all agree on, but other times it may give answers that leave us scratching our heads.
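
(A hedged, toy version of these three steps -- nothing like a real legal system, just an illustration of train-on-past-examples, apply-to-new-cases -- might look like this:)

    # Toy illustration: learn a "logic" from past cases, then apply it to a new one.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Steps 1-2: made-up past "court cases" and their judgements as training labels.
    past_cases = [
        "defendant breached the delivery contract and ignored repeated warnings",
        "clear evidence of fraud and falsified financial records",
        "charges dismissed after alibi confirmed by multiple witnesses",
        "no damages shown and the claim lacks supporting evidence",
    ]
    judgements = ["guilty", "guilty", "not guilty", "not guilty"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(past_cases, judgements)        # the machine infers its own decision rule

    # Step 3: apply that learned logic to a novel case.
    print(model.predict(["records were falsified to hide the contract breach"]))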

There are other types of AI in which you simply program the logic and/or knowledge of a human expert (in this case a judge or many judges) into a machine and allow the machine to simply execute that logic. This type of AI isn't as popular as it used to be.

I hope this sort of answers your question.

Rockwell


Q:

Thank you for this opportunity.

So, my first question is, will AI learn how to write books? By books I mean fiction like Game of Thrones, Pride and Prejudice, or Harry Potter. If yes, when do you expect it to happen? Now that AI can learn from examples, can it learn to write? And will it surpass people at it? Are writers' jobs in danger?

Another question I have is, why do you think we are not in danger of AI taking control like in science fiction? Do you assume we are far from achieving such a level of AI sentience? Do you disagree with the paperclip thought experiment, or is there some other reason why you find it unlikely?

If we exclude religious and similar arguments, how likely is it that AI could achieve levels of sentience and intelligence sufficient to take control and defeat humanity?

A:

Kevin here - so the other questions have been addressed at least in part elsewhere in the comments, so I'll focus on the first one.

AI will absolutely be able to write books. In fact, it's already writing poetry that is indistinguishable from human-authored poetry.

Complete novels will be tougher since they have a lot more structure, coherence, and recurring elements. But with the building blocks in place of being able to artificially create sensible-sounding prose, it won't be long before full novels can be AI-written.

But an important question for all art--music and visual art are other frontiers for AI--is how we choose to value it. Beyond the aesthetics of art (which AI can replicate), we highly value the meaning of art, which comes from morality and ethical purpose, situational experience, and other human aspects. I'm not sure I'd love "Dark Side of the Moon" so much if it wasn't motivated by the gut-wrenching loss of a friend and collaborator to his own inner demons, for example.