We are PhD students from Harvard University here to answer questions about artificial intelligence and cognition. Ask us anything!
Jul 29th 2017 by SITNHarvard • 33 Questions • 8011 Points
Thank you everyone for making this so exciting! I think we are going to call it a day here. Thanks again!!
Thanks for the discussion, everyone! Keep it going! We will try to respond to more questions as they trickle in. A few resources for anyone interested:
Introduction to programming with Codecademy.
More advanced material on the Python programming language (one of the most popular coding languages).
Kaggle Competitions - Not sure where to start with prediction problems? Want to test your machine learning chops against others? Kaggle is the place to go!
Machine Learning: A Probabilistic Perspective - One of the best textbooks on machine learning.
Sklearn - Really great machine learning algorithms that work right out of the box
Hello Redditors! We are Harvard PhD students studying artificial intelligence (AI) and cognition, representing Science in the News (SITN), a Harvard graduate student organization committed to scientific outreach. SITN posts articles on its blog, hosts seminars, creates podcasts, and organizes meet-and-greets between scientists and the public.
Things we are interested in:
AI in general: In what ways does artificial intelligence relate to human cognition? What are the future applications of AI in our daily lives? How will AI change how we do science? What types of things can AI predict? Will AI ever outpace human intelligence?
Graduate school and science communication: As a science outreach organization, how can we effectively engage the public in science? What is graduate school like? What is graduate school culture like and how was the road to getting here?
Rockwell Anyoha is a graduate student in the department of molecular biology with a background in physics and genetics. He has published work on genetic regulation but is currently using machine learning to model animal behavior.
Dana Boebinger is a PhD candidate in the Harvard-MIT program in Speech and Hearing Bioscience and Technology. She uses fMRI to understand the neural mechanisms that underlie human perception of complex sounds, like speech and music. She is currently working with both Josh McDermott and Nancy Kanwisher in the Department of Brain and Cognitive Sciences at MIT.
Adam Riesselman is a PhD candidate in Debora Marks’ lab at Harvard Medical School. He is using machine learning to understand the effects of mutations by modeling genomes from plants, animals, and microbes from the wild.
Kevin Sitek is a PhD candidate in the Harvard Program in Speech and Hearing Bioscience and Technology working with Satra Ghosh and John Gabrieli. He’s interested in how the brain processes sounds, particularly the sound of our own voices while we're speaking. How do we use expectations about what our voice will sound like, as well as feedback of what our voice actually sounds like, to plan what to say next and how to say it?
We will be here from 1-3 pm EST to answer questions!
Proof 2: Us by the Harvard Mark I!
Should we genuinely be concerned about the rate of progression of artificial intelligence and automation?
We should be prepared to live in a world filled with AI and automation. Many jobs will become obsolete in the not-so-distant future. Since we know this is coming, society needs to prepare policies that will make sense in the new era.
In what areas or sectors do you see AI taking a serious foothold in first (Medical, accounting, etc) and why?
Medical image processing has already taken a huge foothold and shown real promise for helping doctors treat patients. For example, a machine has matched human doctors' performance in identifying skin cancer from pictures alone!
The finance and banking sector is also ripe for automation. Traditionally, humans decide which stocks are good or bad and buy the ones they think will perform best. This is a complicated decision process ultimately driven by statistics gathered about each company. Instead of a human reviewing and buying these stocks, algorithms now do it automatically.
We still don't know how this will impact our economy and jobs--only time will tell.
Machine learning is a hot topic right now. What do you all think will be the next big thing in AI?
From a pure machine learning standpoint, I think unsupervised learning is going to be the next big thing in machine learning. Researchers now feed a machine both the data itself (say, an image of a cat) and a label for it (that it is a cat)! This is called supervised learning. Much of the progress in AI has been in this area, and we have seen a ton of great successes with it.
How do we get machines to teach themselves? This is an art called unsupervised learning. When a baby is born, its parents don't have to teach it every single thing about the world--it can learn for itself. This is kind of tricky because how do you tell a computer what to pay attention to and what to ignore? This is not very easy, but folks in the AI field are working on this. (For further reading/listening, Yann LeCun has a great talk about this.)
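To make the supervised/unsupervised distinction concrete, here is a toy sketch in plain Python (all numbers are hypothetical): a supervised "nearest class mean" classifier learns from labeled points, while a tiny k-means loop has to discover the same two clusters without ever seeing a label.

```python
# Supervised: we know each point's label and learn the class means from them.
# The 1-D "brightness" values and the cat/dog labels are made up for illustration.
labeled = [(1.0, "cat"), (1.2, "cat"), (0.8, "cat"),
           (5.0, "dog"), (5.3, "dog"), (4.9, "dog")]

def class_means(data):
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def classify(x, means):
    # Predict the label whose class mean is closest to x.
    return min(means, key=lambda y: abs(x - means[y]))

means = class_means(labeled)
print(classify(1.1, means))  # → cat

# Unsupervised: same points, no labels. A crude k-means loop must discover
# the two clusters on its own -- it never learns the names "cat" or "dog".
points = [x for x, _ in labeled]
centers = [points[0], points[-1]]  # crude initialization
for _ in range(10):
    groups = [[], []]
    for x in points:
        groups[0 if abs(x - centers[0]) < abs(x - centers[1]) else 1].append(x)
    centers = [sum(g) / len(g) for g in groups]
print(sorted(round(c, 1) for c in centers))  # → [1.0, 5.1]
```

The key difference: the supervised model is told what the groups mean, while k-means only finds that two groups exist.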
This is kind of tricky because how do you tell a computer what to pay attention to and what to ignore? This is not very easy, but folks in AI field are working on this.
I think you may be massively understating this. As you undoubtedly know yourself, this is called the 'frame problem', and AI research has been working on this problem for almost 50 years now without any progress. So it's misleading to say 'we are currently working on it' as if this were a new focus or recent development in research.
Do you have any opinions on Heideggerian AI?
Thanks for your response. I guess I was referring to the specific algorithmic framework for unsupervised learning--simply finding P(X). [i.e. a complicated nonlinear probability distribution of your data] Generative models are used for this; they are useful because they give you a way to somehow probe at the underlying (latent) variables in your data and allow you to generate new examples of data.
This has previously been tackled with the Wake-Sleep algorithm, but without much success, and then Restricted Boltzmann Machines and Deep Belief Networks, but these have been really challenging to get working and applied to real world data.
Recently, models like Variational Autoencoders and Generative Adversarial Networks have broken through as some of the simplest yet most powerful generative models. These allow you to quickly and easily perform complicated tasks on unstructured data, including creating endless drawings of human sketches, generating sentences, and automatically colorizing pictures.
So yes, I agree, folks are working on this, and have been for a long time. With these new techniques, I think we are approaching a new frontier in getting machines to understand our world all on their own.
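The core idea of "finding P(X)" can be illustrated with the simplest possible generative model, a 1-D Gaussian: fit it to unlabeled samples, then draw brand-new samples from the fit. (VAEs and GANs do conceptually the same thing for vastly more complex, high-dimensional distributions; the numbers below are made up for illustration.)

```python
import random
import statistics

random.seed(0)
# "Observed" unlabeled data, secretly drawn from a Gaussian with
# mean 10 and standard deviation 2.
data = [random.gauss(10.0, 2.0) for _ in range(5000)]

# Fit P(X): maximum-likelihood estimates of the Gaussian's parameters.
mu = statistics.fmean(data)
sigma = statistics.pstdev(data)
print(round(mu, 1), round(sigma, 1))  # close to 10.0 and 2.0

# Generate: draw brand-new examples from the fitted model.
new_samples = [random.gauss(mu, sigma) for _ in range(5)]
```

The fitted model both explains the data (via its parameters) and can produce endless new examples, which is exactly the pitch for generative models above.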
Do you think there are any specific laws Governments should be putting in place now, ahead of the AI advancements?
The three laws of robotics suggested by Isaac Asimov:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Seriously speaking, there should probably be some laws regulating the application of AI, and maybe an organization that evaluates code if AI will be used in moral and ethical situations. The problem that comes to mind is a driverless vehicle continuing its course to save 1 person or deliberately swerving to save 10 people. I'm not an expert though.
I'm a second-year student interested in AI and machine learning. I was hoping that in the future, opportunities related to AI safety would open up. Do you have any tips on things I should do, or courses I should take, in this general direction? Thanks!
The folks at Google wrote a pretty interesting article about the safety concerns with AI in the near future. They had five main points:
1) “Avoiding negative side effects” - If we release an AI out into the wild (like a cleaning or delivery robot), how can we be sure it won’t start attacking people? How do we prevent it from doing that in the first place?
2) “Avoiding reward hacking” - How do we ensure a machine won’t “cheat the system”? There is a nice thought experiment/story about a robot whose only goal is to make as many paperclips as possible!
3) “Scalable oversight” - How can we ensure a robot will perform the task we want it to do without “wasting its time”? How do we tell it to prioritize?
4) “Safe exploration” - How do we teach a robot to explore its world? If it is wandering around, how does it avoid falling into a puddle and shorting out? (Like this poor fellow)
5) “Robustness to distributional shift” - How do we get a machine to work all the time in every condition that it could ever encounter? If a floor cleaning robot has only ever seen hardwood floors, what will it do when it sees carpet?
For courses, this is very uncharted territory! I don’t think our understanding of machine learning is far enough along that courses cover these issues yet, but that is coming! I would advise becoming familiar with algorithms and how these machines work!
Edit: Forgot a number, and it was bugging me.
Recently, Facebook engineers turned off a machine-learning program they were using for translation, which was reported as having organically created its own language. Is this anywhere near as interesting as it seems on the surface? Why or why not?
So I scoured the internet and I think I found the original article about this. In short, I would say this is nothing to be afraid of!
A big question in machine learning is: how do you get responses that look like something a human produced, or that you would see in the real world? (Say you want a chatbot that speaks English.) Suppose you have a machine that can spit out examples of sentences or pictures. One way to train it would be to have the machine generate a sentence the way a human would, and then tell the machine whether it did a good or bad job. But having a human provide that feedback is slow and takes a lot of time. Since these are learning algorithms that “teach themselves,” they need millions of examples to work correctly, so grading a machine millions of times is out of reach for humans.
Another way to do this is to have two machines doing two different jobs: one producing sentences (the generator), and the other telling it whether those sentences look like some language (the discriminator).
From what I can understand from the article, the machine that was spitting out language was working, but the machine that asked “Does this look like English or not?” was not. Since their end goal was a machine that spoke English, the system was definitely not working, so they shut it down. The machines that were producing language did not understand what they were saying, so I would almost classify their output as garbage.
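The generator/discriminator division of labor can be caricatured in a few lines of plain Python. This is not a real GAN (real systems train both networks with gradients); here the "discriminator" is a hand-coded scoring heuristic and all numbers are made up, but it shows how generator output improves purely by chasing the discriminator's score.

```python
import random
random.seed(1)

real_mean = 5.0  # the (hidden) mean of the "real" data distribution

def discriminator(batch):
    # Scores how "real" a batch looks: 1.0 when its mean matches the
    # real data's mean, falling toward 0 as it drifts away.
    m = sum(batch) / len(batch)
    return max(0.0, 1.0 - abs(m - real_mean) / real_mean)

gen_mean = 0.0  # the generator's single learnable knob
for _ in range(200):
    # The generator hill-climbs its parameter to fool the discriminator.
    candidate = gen_mean + random.uniform(-0.5, 0.5)
    old = discriminator([random.gauss(gen_mean, 1.0) for _ in range(50)])
    new = discriminator([random.gauss(candidate, 1.0) for _ in range(50)])
    if new > old:
        gen_mean = candidate

print(round(gen_mean, 1))  # typically ends up near 5.0, the real mean
```

The generator never sees the real data directly; the only training signal is the discriminator's "does this look real?" score, which is exactly the feedback loop described above.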
What is it like being graduate students at Harvard? Such a prestigious school, do you feel like you have to exceed expectations with your research?
My therapist told me not to discuss this issue. - Dana
As someone with a coding background but no ML background, what libraries or algorithms would you recommend looking into to become more educated in ML? Tensorflow? Looking into neural networks?
Kevin here: On the cognitive science side, I'm seeing lots of people get into TensorFlow as a powerful deep learning tool. For more general machine learning applications, scikit-learn gets a ton of use in scientific research. It's also been adapted/built on in my specific field, neuroimaging, in the tool nilearn.
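As a taste of the scikit-learn workflow the comment recommends, here is a minimal starter example (the dataset and model choice are just illustrative) that fits a classifier on the library's built-in iris dataset and scores it on held-out data:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small built-in dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit a classifier and evaluate on the held-out data.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))  # held-out accuracy, typically > 0.9
```

Nearly every scikit-learn estimator follows this same fit/predict/score pattern, which is why it is such a popular entry point.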
When any of you meet someone new and explain what you do/study, do they always ask singularity related questions?
What material would you point a computer science student towards if they were interested in learning more about AI?
Thanks for the question! We put some resources at the top of the page for more info on getting into machine learning. It is a pretty diverse field and it is changing very rapidly, so it can be hard to stay on top of it all!
Will it be possible for machines to feel? If so, how will we know and measure such a phenomenon?
By "feel" I'm assuming you're referring to emotion. It'd be controversial to say that we can even measure human emotion. If you're interested in that stuff, Cynthia Breazeal at MIT does fantastic work in this area. She created Kismet, a robot that could sense and mimic human emotions (or, more accurately, facial expressions).
Who's paying your tuition, car insurance, everyday food money, etc. Who's funding your life?
Has anyone used machine learning to create viruses? What's stopping someone from making an AI virus that runs rampant through the internet? Could we stop it if it became smart enough?
Or is that all just scary science fiction?
People use machine learning to create viruses all the time; there has always been a computational arms race between viruses and antivirus software. People who work in computer security don't mess around, though. They get paid big bucks to do their job, and the field has some of the smartest people around.
Crazy people will always do crazy things. I wouldn't lose sleep over this. Security is always being beefed up and if it's breached we'll deal with it then.
1) This is probably mostly for Dana. My understanding of fMRI is limited, but from what I understand, the relationship between blood-oxygen levels and synaptic activity is not direct. In what way does our current ability in brain scanning limit our true understanding of the relationship between neuronal activity and perception? Even with infinite spatial and temporal resolution, how far would we be from completely decoding a state of brain activity into a particular collection of perceptions/memories/knowledge/etc.?
2) Have any of you read Superintelligence by Nick Bostrom? If so, I'd love to hear your general thoughts. What do you make of his warnings of a very sudden general AI take-off? Also, do you see the concept of whole brain emulation as an eventual inevitability, as is implied in his book, given the increases in processing power and our understanding of the human brain?
Dana here: So, fMRI infers neural activity by taking advantage of the fact that oxygenated blood and deoxygenated blood have different magnetic properties. The general premise is that you use a network of specific brain regions to perform a task, and active brain regions take up oxygen from the blood. Then to get more oxygen, our bodies send more blood to those parts of the brain to overcompensate. It's this massive overcompensation that we can measure in fMRI, and use to determine which brain regions are actively working to complete the task. So this measure is indeed indirect - we're measuring blood flow yoked to neural activity, and not neural activity itself.
But although the BOLD signal is indirect, we are still able to learn a lot about the information present in BOLD activity. We can use machine learning classification techniques to look at the pattern of responses across multiple voxels (3D pixels in the fMRI image) and decode information about the presented stimuli. Recently, neuroscientists have also started using encoding models to predict neural activity from the characteristics of a stimulus, and thus describe the information about a stimulus that is represented in the activity of specific voxels.
However, this is all operating at the level of a voxel - and a single voxel contains tens of thousands of neurons!
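As a cartoon of the decoding idea described above, here is a toy example in plain Python: we simulate noisy multi-voxel response patterns for two made-up stimulus categories and decode them with a nearest-template classifier (a minimal stand-in for the SVMs or regularized regressions used in real MVPA; all "voxel" values here are synthetic, and real analyses use tools like scikit-learn or nilearn on actual BOLD data).

```python
import random
random.seed(0)

n_voxels = 20
# Each category evokes its own characteristic (hidden) pattern of voxel activity.
pattern = {c: [random.gauss(0, 1) for _ in range(n_voxels)]
           for c in ("speech", "music")}

def simulate_trial(cat):
    # One trial: the evoked pattern plus measurement noise.
    return [v + random.gauss(0, 0.5) for v in pattern[cat]]

# "Train": average several noisy trials per category into a template.
templates = {c: [sum(col) / 10 for col in
                 zip(*[simulate_trial(c) for _ in range(10)])]
             for c in pattern}

def decode(trial):
    # Classify a new trial by its nearest template (squared distance).
    def dist(t):
        return sum((a - b) ** 2 for a, b in zip(trial, t))
    return min(templates, key=lambda c: dist(templates[c]))

# Test on fresh simulated trials.
correct = sum(decode(simulate_trial(c)) == c
              for c in ("speech", "music") for _ in range(50))
print(correct, "out of 100 test trials decoded correctly")
```

The point is that the category information lives in the *pattern* across voxels, not in any single voxel, which is exactly what multi-voxel decoding exploits.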
When will there be AI to replace our congressmen and other (you know who!) politicians? And can we do anything to speed up the process?
Politics, ethics, and the humanities and liberal arts in general will be the hardest things for AI to replace.
I studied industrial design and I'm very interested in AI and machine learning. What would be your suggestions on how to begin to learn to utilize and get involved in AI and machine learning without a background in programming/computer science/software engineering?
Learning a programming language is a start (I'm starting to learn some Python), but I don't really know a path beyond that.
Thanks for the question! We put some links at the top of the page for more information! Keep on going!
Hello! Deeply fascinated with AI, thanks for doing an AMA.
What is your take on the recent development of deep learning structures developing their own languages without human input?
Thanks for asking! I think I answered the question here. Hopefully that clears it up a bit!
Does the AI have the capability to choose to do, or not do, something based on its own observations? Or only if it's coded into the AI to make those choices? In other words, does the AI have the freedom to choose, or are its choices already made based on algorithms?
Here is the basic gist of how most AI "learns".
First, you choose a task that you want your AI to perform. Let's say you want to create an AI that judges court cases and gives a reason for its decisions.
Second, you train your AI by giving it examples of past court cases and the resulting judgments. During this process, the AI uses all the examples to develop a logic that's consistent across them.
Third, the AI applies this logic to novel court cases. The scariest part about AI is that in most cases we don't really understand the logic that the computer develops; it just tends to work. The success of the AI depends heavily on how it was trained. Many times it will give a decision that is obvious and we can all agree on, but other times it may give answers that leave us scratching our heads.
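The three steps above can be sketched in a few lines of Python. This is a hypothetical toy example: the "court case" features (prior offenses, evidence strength) and outcomes are entirely made up, and the "logic" the machine induces is just a single evidence threshold.

```python
# Step 1: the task is to predict a verdict from case features.
# Each training example: ((prior_offenses, evidence_strength 0-1), verdict).
train = [((0, 0.2), "acquit"), ((1, 0.3), "acquit"), ((0, 0.1), "acquit"),
         ((0, 0.9), "convict"), ((3, 0.8), "convict"), ((2, 0.7), "convict")]

# Step 2 ("training"): search for the evidence threshold that best
# reproduces the past judgments -- the logic induced from examples.
def accuracy(threshold):
    hits = sum(("convict" if ev >= threshold else "acquit") == verdict
               for (_, ev), verdict in train)
    return hits / len(train)

best = max((t / 100 for t in range(101)), key=accuracy)

# Step 3: apply the induced rule to a novel case.
novel_evidence = 0.85
print("convict" if novel_evidence >= best else "acquit")  # → convict
```

Even in this tiny example, the learned threshold is whatever happened to separate the training data, not a rule anyone wrote down, which is the sense in which we often "don't really understand the logic the computer develops."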
There are other types of AI in which you simply program the logic and/or knowledge of a human expert (in this case a judge or many judges) into a machine and allow the machine to simply execute that logic. This type of AI isn't as popular as it used to be.
I hope this sort of answers your question.
Thank you for this opportunity.
So, my first question is: will AI learn how to write books? By books I mean fiction like Game of Thrones, Pride and Prejudice, or Harry Potter. If yes, when do you expect it to happen? Now that AI can learn from examples, can it learn to write? And will it surpass people at it? Are writers' jobs in danger?
Another question I have is: why do you think we are not in danger of AI taking control, like in science fiction? Do you assume we are far from achieving such a level of AI sentience? Do you disagree with the paperclip thought experiment, or is there some other reason you find it unlikely?
If we exclude religious and similar arguments, how likely could AI achieve levels of sentience and intelligence to take control and defeat humanity?
Kevin here - so the other questions have been addressed at least in part elsewhere in the comments, so I'll focus on the first one.
AI will absolutely be able to write books. In fact, it's already writing poetry that is indistinguishable from human-authored poetry.
Complete novels will be tougher since they have a lot more structure, coherence, and recurring elements. But with the building blocks in place of being able to artificially create sensible-sounding prose, it won't be long before full novels can be AI-written.
But an important question for all art--music and visual art are other frontiers for AI--is how we choose to value them. Beyond the aesthetics of art (which AI can replicate), we highly value the meaning of art, which comes from morality and ethical purpose, situational experience, and other human aspects. I'm not sure I'd love "Dark Side of the Moon" so much if it wasn't motivated by the gut-wrenching loss of a friend and collaborator to his own inner demons, for example.