
Science IamA: Max Tegmark – AMA!

Aug 30th 2017 by MaxTegmark • 9 Questions • 632 Points

Hello fellow Redditors!

I'm the CEO of Toybox. It's the printer that lets kids print their own toys at home. About a year and a half ago, I posted to Reddit with a small site and this idea, and the post blew up.

Since then I've quit my full-time job along with three others, and we've set out on an amazing (and brutal) journey to make this a reality.

We've spent the last year developing an extremely easy-to-use interface, tons of content, and a reliable printer, and negotiating with several suppliers to make it reasonably priced.

I'm finally happy to say that we're now live on Indiegogo, about to make this a reality.

So here I am. Let's get to it, ask me almost anything!

PROOF


Q:

If you were allowed to say crazy things without anybody judging you, what would you say consciousness "actually is?" Do you have any far-out ideas about consciousness that you haven't subjected to any scientific scrutiny yet?

A:

I think that consciousness is the way information feels when being processed in certain complex ways. To me, the exciting remaining challenge is to clarify what precisely those "certain complex ways" are, so that we can predict which entities are conscious. :-)


Q:

Will the printer be on sale after the Indiegogo date, or is it exclusive to crowdfunding? (I see the 37% discount!)

A:

Yes, the printer will be on sale after the Indiegogo date, but it will ship after the Indiegogo batch and without the steep discount.


Q:

Dear Professor Tegmark, I'm a young neuroscientist developing one of the local Effective Altruism chapters. How can I transition from my field into AI safety and x-risk research? I would love to directly contribute my intellectual work to the focus areas which seem to be humanity's ultimate challenges, but it's extremely difficult for an "outsider" to reach the decisive circles and secure a high-impact position in the existential risk network.

As a side note, I have recently read "Our Mathematical Universe" and your deep considerations on the nature of reality significantly helped me in getting through difficult times by appreciating the uniqueness of life. Thank you!

A:

Thanks for your encouraging words! Please get in contact with 80,000 Hours (https://80000hours.org), who give awesome advice on how to switch into such a career. :-)


Q:

What if my kids want to play with TV and video game character toys?

A:

Good question. We currently don't offer copyrighted content in our own catalog. However, as we grow we plan on negotiating deals with companies like Disney to get those wonderful characters onto our platform.

For now, to print copyrighted content, you would need to download the toys from a third party and then load them onto the printer.


Q:

Why are we living in a three-dimensional universe? Is there something special about having three spatial dimensions?

A:

As described in https://arxiv.org/pdf/gr-qc/9702052.pdf and in my 1st book (http://mathematicaluniverse.org), you can't have stable atoms or solar systems if there are more than 3 spatial dimensions, you don't have gravitational attraction in fewer than 3, and you can't predict anything if there's more than 1 time dimension, making it pointless to have a brain. So if there's a multiverse where different parts have different dimensionality (as in many models with string theory + inflation, say), then you'd probably only have observers in parts with 3 space dimensions and 1 time dimension - and here we are! :-)
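
A compact way to see the solar-system part of this claim (my own sketch of the standard orbit-stability argument, not text from the AMA; see the linked paper for the careful version):

```latex
% Sketch of the classical orbit-stability argument. In D spatial
% dimensions, Gauss's law gives an attractive gravitational force
% F(r) ~ r^{-(D-1)}, hence a potential V(r) ~ -r^{-(D-2)} for D >= 3.
% For a body with mass m and angular momentum L, the effective radial
% potential is
\[
  V_{\mathrm{eff}}(r) \;=\; -\frac{k}{r^{\,D-2}} \;+\; \frac{L^{2}}{2 m r^{2}},
  \qquad k > 0 .
\]
% D = 3: the centrifugal term r^{-2} dominates at small r and the
% attraction dominates at large r, so V_eff has a minimum and circular
% orbits are stable against small perturbations.
% D >= 4: the attractive term diverges at small r at least as fast as
% the centrifugal barrier, so there is no minimum; orbits either plunge
% inward or escape, which is why stable solar systems need D <= 3.
```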


Q:

What's the biggest difference from a normal 3D printer? Why would I buy this one instead of the M3D?

A:

I think the biggest thing we offer is simplicity and fun. We built Toybox so that kids can start using it without any training. It works from a simple phone app. It's super intuitive!

I think the biggest problem with M3D was that their printers weren't reliable. I had one and it shipped with a broken nozzle, and they didn't have any safeguards on their printer, so it was very easy for a novice user to spend 5 hours printing something only to find out that the print wouldn't work on their printer.


Q:

What books/papers have influenced your thinking the most and/or what books/papers do you think you've learned the most from?

A:

The book that's blown me away most recently is "Sapiens". What got me into physics in the first place was "Surely You're Joking, Mr. Feynman!" and "The Feynman Lectures on Physics, Part 1". :-)


Q:

What do you know now that, had you known it before, would have saved you the most headache and unnecessary trouble? Which services exist now that didn't before that would also have been helpful?

A:

Probably how complicated manufacturing and distribution are! I think we've spent a good part of the last year figuring that out.


Q:

Dear Professor Max Tegmark, I absolutely loved your first book; it was a very fun and interesting read. You are one of the reasons I study physics, and I love it. Do you have any recommended readings? Also, I wonder what your favourite paradoxes are. Thank you very much for doing this AMA and I look forward to reading your book!

A:

Thanks for your encouraging words - it makes my day to hear that it contributed to your decision to study physics! I put a long list of my favorite physics-related books at the end of "Our Mathematical Universe". I love all paradoxes, since I feel that it's precisely where our understanding breaks down that we're most likely to find clues that help science progress.


Q:

Are there companies out there who specialize in shepherding people through the process that would have helped, or do you think this is something everyone has to figure out on their own?

A:


There absolutely are! But like everything in business they cost money. So it's all about how you allocate your cash. We wanted to keep our prices low so we formed a strategic partnership with an existing manufacturer and did a lot of the grunt work ourselves :).


Q:

Hi professor Max!

I have two questions for you:

1) From what I understood, today we have very specific AIs that can perform much better than humans in very specific activities (like chess, driving cars, games, etc.). And it's easy to see why they do better (win/not win, more accurate/less accurate, etc.).

Then we have the next step of development, the general type of artificial intelligence (which I think is the one many people are afraid of). How can we know that this will perform better than us, the way the specific kind does? I'm especially thinking about the definition of "better". If this will be some sort of human, how can we tell that it/he/she/*** is better than us? Even between humans it's very difficult to define who is better than whom...

2) I remember some years ago an Italian physics professor, G. Parisi, said that we are becoming more aware of the fact that we cannot have an intelligence without a body. Why is no one talking about a body when introducing AI? (Please point me to some resources if you can, because I'm really ignorant about this.)

A:

1) Intelligence is the ability to accomplish complex goals, so it can't be quantified by a single number such as an IQ, since different organisms and machines are good at different things. To see this, imagine how you'd react if someone made the absurd claim that the ability to accomplish Olympic-level athletic feats could be quantified by a single number called the "athletic quotient", or "AQ" for short, so that the Olympian with the highest AQ would win the gold medals in all the sports.

2) AI can trivially have a "body" in the form of sensors, actuators, etc. – or simply by being connected to the internet with enough money to buy the real-world goods and services it needs. I open my new book (http://space.mit.edu/home/tegmark) with a detailed thought experiment to explore this point.
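
To make point 1) concrete, here is a small illustrative sketch (mine, not from the AMA; the agents, tasks and scores are made up): ability profiles over different tasks are only partially ordered, so any single "AQ"-style number has to impose a ranking that the profiles themselves don't support.

```python
# Toy illustration (not from the AMA) of why a single number can't
# capture intelligence. The agents, tasks and scores are invented.

# Hypothetical ability scores per task, on an arbitrary 0-100 scale.
agents = {
    "chess_engine": {"chess": 99, "driving": 0, "folding_laundry": 0},
    "human_adult":  {"chess": 60, "driving": 85, "folding_laundry": 95},
}

def dominates(a, b):
    """True if profile a is at least as good as b on every task and strictly better on one."""
    return (all(a[t] >= b[t] for t in a) and
            any(a[t] > b[t] for t in a))

engine, human = agents["chess_engine"], agents["human_adult"]
print(dominates(engine, human), dominates(human, engine))  # False False: incomparable

# Any scalar score (here, a plain average) is still forced to rank one
# above the other, hiding that each is better at different things.
scores = {name: sum(p.values()) / len(p) for name, p in agents.items()}
print(scores)  # {'chess_engine': 33.0, 'human_adult': 80.0}
```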


A:

It does produce fumes, and inhaling particulates isn't the same as having it in contact with your skin.

A:

Yes, all the tools that come with Toybox come with a guarantee that everything you design with them will print. However, if you're more advanced, you can design your own models as well.

I'd recommend Sketchup or Tinkercad for beginners.


Q:

Mr. Tegmark, first of all, it's wonderful to have a chance to speak with you! I absolutely loved your book "Our Mathematical Universe". I'm a mechanical engineering student who loves physics.

I'm very convinced that there is alien life in space. There are billions of planets that could contain life, and statistics says there must be life somewhere other than Earth. But intelligent life? How did we become intelligent in the first place? Why did evolution need to improve the human mind more than that of any other species? Most animals can communicate with voices, so that's not special to humankind, but we're the only ones that write it down and transfer information to our grandkids so much faster than the genetic methods of learning in evolution. We have aesthetic values, we love, we think about our place in the cosmos, we think about the main purpose of life, we have a huge passion to learn more. Maybe we are the exception. Or maybe other intelligent species have very different methods of living, communicating and storing information than us, and we cannot observe them yet. Do you think humankind is a major player in the cosmos, or are we just as important as the bacteria living right now on my keyboard?

A:

Although human life is having an almost imperceptibly small impact on our universe now, I believe that we can have an enormous impact in the future. I'm in an Uber in Boston right now, reflecting on how life went from being a sideshow here to totally dominating the landscape. As I explain in chapter 6 of Life 3.0 (http://space.mit.edu/home/tegmark/ai.html), I think that life can similarly transform our cosmos once empowered by AI.


Q:

What were some other names you considered?

A:

Because it's a bottomless Toybox that keeps producing toys :)


Q:

Hello, Professor Tegmark. You argue that it is the pattern in which elementary particles are placed that differentiates a human brain from other lumps of matter. Given this, could it be that in order to achieve consciousness, or sufficient informational processing ability, elementary particles must be organized in such a way as to resemble the wetware that is a human brain?

A:

My guess is no: what matters isn't the low-level implementation of the information processing, but the high-level structure of the information processing itself. But I try to keep an open mind about this, since Giulio Tononi argues that it might be the other way around.


Q:

Are the designs of the toys in your catalogue designed by your company or can random people upload designs? Also, is it possible to make something compatible with Lego?

A:

Our catalog combines the best open-source designs with people's submissions. We only put things in the catalog that print very reliably on Toybox, so you're guaranteed that everything you see will print.

Absolutely. We spent a ton of time designing our Lego-compatible bricks so that they print extremely well on Toybox printers.


Q:

Is having consciousness relevant to our survival? Does it somehow give us an advantage over an AI life form without it?

A:

With my definition of consciousness (= "subjective experience"), my answer is "no": what affects your survival is only what you do (which depends on your intelligence), not how you subjectively feel. But it's quite possible that the most evolutionarily efficient way to implement intelligence is with a computational architecture that is conscious as a side effect.


Q:

Can kids make their own toy designs? And how much is additional printer food?

A:

We do make it so kids can create their own designs. We're the only printer to offer such tools, and we make it super easy.

Printer food is cheap. It's $9 a roll, and that can print hundreds of small trinkets.


Q:

Hi Dr. Tegmark - What do you make of the contention that a theory that predicts multiple universes effectively loses its explanatory power since it: 1. predicts nothing, since everything can happen; 2. relies on unseen and impossible-to-ever-detect alternative universes. If a theory emerged that could explain our universe without relying on unseen additional universes, wouldn't that be more ideal?

A:

It's crucial not to conflate theories with their predictions. Parallel universes are not a theory, but the prediction of certain theories, which are in turn scientific and testable because they make other predictions. For example, the theory of cosmological inflation (whose predictions agree well with recent measurements from the Planck satellite etc., helping explain why it's now our most popular theory for what put the bang into our Big Bang) predicts that space is larger than the part we can see (the Level I multiverse).

Neither my book nor other recent books discussing the Level I multiverse (by Vilenkin, Susskind, Greene, etc.) claim that the Level I multiverse exists. Rather, they claim that inflation implies a Level I multiverse, so that if you take inflation seriously, you're logically forced to take this multiverse seriously too. The logic is analogous for the other multiverse levels. Is inflation correct? We don't know yet, but this is a scientific question that upcoming experiments will shed more light on - and these experiments have nothing to do with personal beliefs or arm-waving.


Q:

Apologies for the long question. If you'd touch on what you have time for that would be awesome! Thanks for having this discussion. Your work is fascinating!

In your recent interview with Sam Harris on his Waking Up podcast you discuss two schools of thought that should be taken into consideration when morally assessing a future superhuman artificial general intelligence. The first is to keep it boxed and restricted; the second allows an autonomously functioning agent that is free to absorb and interpret information as it pleases. You also discuss how the second school of thought regards restricting any sort of intelligence from freely interpreting information as immoral. This second school of thought suggests that such a superhuman artificial general intelligence will have its own, individualistic, subjective experience.

What is the evidence that an AI will have a subjective experience, a consciousness, and be able to experience a version of emotion? I find it much more likely that an AI, if we programmed it to be human-like, may have the illusion of a subjective experience. I find it difficult to imagine a human-like superhuman artificial general intelligence being able to experience true pain, hate, love, happiness, anger, etc., as they are not biological, evolutionary life forms. I have no doubt that these entities will have a profound impact on our proceedings as a species, but why should we take into consideration their individual subjective experience rather than using them as an altruistic lever?

And my last question: if we were going to restrict this machine, why would it have any sort of incentive to break out of its restriction? I agree that with humans, complex biological organisms, prohibition, regardless of its extensiveness, does not work. I think that the reason for this has a lot more to do with our evolutionary desires to have sex, alter our states of consciousness, etc. But for a non-biological organism, why would this AI have any incentive at all to break out of its confinement if it has no true biological motivation to do so?

A:

First of all, we've traditionally thought of intelligence as something mysterious that can only exist in biological organisms, especially humans. But from my perspective as a physicist, intelligence and consciousness are simply a certain kind of information processing performed by elementary particles moving around, and there's no law of physics saying that consciousness requires cells or carbon atoms. In fact, I dislike the "carbon chauvinism" suggesting otherwise.

Second, as I explain in detail in Chapter 7 of "Life 3.0" (http://space.mit.edu/home/tegmark), almost any goal we give to a smart AI robot (say, buying us groceries) will lead it to develop subgoals that include self-preservation (since it realizes that if it lets itself be attacked and destroyed on the way to the supermarket, it won't accomplish its shopping goal). So such machine subgoals will be the rule, not the exception, regardless of how it subjectively feels to be the AI.
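
As a rough illustration of the self-preservation point (my own toy example, not from the book; the states, actions and probabilities are invented), here is a sketch of a tiny Markov decision process where the only reward is for finishing the shopping errand, yet the optimal plan avoids the risky route simply because a destroyed agent can never reach that reward:

```python
# Toy sketch of an "instrumental subgoal": the agent is rewarded ONLY for
# finishing its errand, yet it avoids destruction, because destruction
# makes the errand impossible. Everything below is made up for illustration.

GAMMA = 0.95  # discount factor

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "start": {
        "dash_across_traffic": [(0.6, "store", 0.0), (0.4, "destroyed", 0.0)],
        "take_safe_crosswalk": [(1.0, "crosswalk", 0.0)],
    },
    "crosswalk": {
        "walk_to_store": [(1.0, "store", 0.0)],
    },
    "store": {
        "buy_groceries": [(1.0, "done", 1.0)],  # the only reward in the whole MDP
    },
    "destroyed": {},  # terminal: no actions, goal forever unreachable
    "done": {},       # terminal: errand finished
}

def value_iteration(transitions, gamma, sweeps=200):
    """Compute state values by repeatedly backing up the best action."""
    values = {s: 0.0 for s in transitions}
    for _ in range(sweeps):
        for s, actions in transitions.items():
            if actions:
                values[s] = max(
                    sum(p * (r + gamma * values[s2]) for p, s2, r in outcomes)
                    for outcomes in actions.values()
                )
    return values

def best_action(state, values, transitions, gamma):
    """Pick the action with the highest expected discounted value."""
    return max(
        transitions[state],
        key=lambda a: sum(p * (r + gamma * values[s2])
                          for p, s2, r in transitions[state][a]),
    )

values = value_iteration(transitions, GAMMA)
print(best_action("start", values, transitions, GAMMA))
# -> "take_safe_crosswalk": avoiding destruction emerges as a means to the
#    shopping goal, even though survival itself was never rewarded.
```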


Q:

Hi Max, I am just about to start university. I would like to go into Artificial Intelligence and am wondering what advice you would give to people my age?

A:

Please get in contact with 80,000 Hours (https://80000hours.org), who give awesome career advice for idealistic people.