So much of what has been labelled as AI, when projected into the future, has had the sticker torn off once the accomplishment was achieved.
There's an interesting article (http://bit.ly/MBmLSA) from American Scientist that covers some of the early days with Minsky and McCarthy, through to Schaeffer's work on checkers, language translation, and knowledge systems such as Watson, right up to the online AI courses that anyone can now sign up for.
The main point of the article is that much of AI's success has come from "shallow understanding" rather than "deep understanding."
Language translation employs word and phrase lookups; checkers relies on endgame databases and move lookahead - rules applied over and over, as in the sketch below.
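To make that "shallow understanding" concrete, here's a minimal minimax sketch over a toy subtraction game (take one or two stones; whoever takes the last stone wins). It's a generic illustration of brute-force move lookahead, not Schaeffer's actual Chinook code - the toy game is just a stand-in for checkers:

```python
def minimax(pile: int, maximizing: bool) -> int:
    """Score a position by exhaustive lookahead: +1 if the maximizing
    player can force a win from here, -1 otherwise. No insight - just
    the game's rules applied over and over."""
    if pile == 0:
        # The previous player took the last stone, so the mover lost.
        return -1 if maximizing else +1
    scores = [minimax(pile - take, not maximizing)
              for take in (1, 2) if take <= pile]
    return max(scores) if maximizing else min(scores)

def best_move(pile: int) -> int:
    """Pick the move whose looked-ahead score is best."""
    return max((take for take in (1, 2) if take <= pile),
               key=lambda take: minimax(pile - take, maximizing=False))

print(best_move(7))  # -> 1: leaving a pile of 6 (a multiple of 3) forces a win
```

Nothing in there "understands" the game, yet iterate it deeply enough (or bolt on an endgame database) and it plays perfectly.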
Each time we think of problems that would require "true thinking", we later seem to move the goalposts and decide that the solution didn't, in fact, require any deep insight on the computer's part.
And yet progress marches ever on. More and more AI systems are being employed in ever more diverse fields, whether it's refining city management, controlling satellites, or aiding diagnosis - or even at the level of knowing which ads you'd most like, which books you'd buy, or which people are most influential in certain circles for certain products.
The question will be, at what point does the illusion go away?
At what point does it become impossible to tell ourselves it's not really smart - that it's just applying some basic rules, iterated over a lot of flops?
Because given the non-linear rate of advancement, by the time something starts getting noticed, the cutting edge will already be what was around the corner, and what's around the corner from that, well ...
I think there will be some interesting times ahead where we end up with systems that cross over the following divides:
- of self-teaching
- of sentience
- of self-awareness
And even in those areas, it's not a binary situation, and I think we'll see debate over systems with limited learning vs those with more flexible, unlimited learning. And the terminology will be hotly debated.
What are the bounds? What is the environment? What is being learned?
I
suspect in some ways that debate will continue all the way up to the
point where machines start clearly surpassing humans and have ticked off
all three boxes above: being auto-didactic, sentient and self-aware.
And then there's the complex issue of creativity.
And because I'd argue that none of those are entirely binary, and that progress is more continuous, I think the discussion will carry on for some time, merely asymptoting towards zero without any actual moment where we just stop. Though it's likely there will be drop-off points where we go: see, this particular machine or accomplishment is strong evidence ...
In a way, that's decades off, depending on how you like to put your marks on curves. But one way or another, we're in for some upheaval.
To quote William Gibson: the future is already here, it's just not very evenly distributed.
Sunday, August 12, 2012
Monday, August 6, 2012
Technologies that can't ever exist? Or can they?
In this interesting article from io9, the argument is made that some of sci-fi's futuristic tech just "can't ever exist in reality."
Which seems a fairly far-fetched claim. If they'd claimed these were things you wouldn't see in the near term, I could probably relax and enjoy it a little more.
In some ways, they do actually acknowledge this. They admit occasionally that things are possible ... so I'm going with the belief that they've just overemphasised the impossibility of some of the items for dramatic effect. And for generating discussion - which they've achieved. Certainly, the things they're talking about are mostly pretty far out.
Fortunately, I have a copy of Michio Kaku's excellent Physics of the Impossible with me, so I'll drop in a line and a reference for some of the items on io9's list. Most of them are addressed in his book.
Kaku breaks things into three classes:
- the first: impossible right now, but available in the near(-ish) term, say within this century or so, with no known conflicts with physics as we understand it.
- the second: things not disallowed by physics, but which would probably require a new understanding of physical law, or many centuries of effort, to accomplish.
- the third: things that the laws of physics, as we currently know them, outright forbid.
And now for the io9 list:
1. Lightsabers
The argument is that you can't contain the beam, and that "a power source that powerful doesn't, and can't, exist." Convincing argument that.
Kaku addresses this in 1.3, says there are no theoretical showstoppers, and suggests we might see one by the end of the century, making it Class 1.
2. Human teleportation
They admit it's done on the small scale, but argue that it cannot scale, that every molecule would have to be in the exact correct location, and that, since it's a destructive copy, it's a suicide machine.
Well. Scaling is obviously a seriously non-trivial issue, but I failed to note the point at which it just stops working. Further, our bodies aren't static works of art where every molecule stays forever in place - we're a walking Ship of Theseus. And if you've just made a copy, then the fact that the original is destroyed in the process doesn't seem particularly relevant, let alone a reason why it can't be done.
Kaku actually vacillates on this a little in 1.4 - despite teleportation being in section 1, he says viruses and cells are Class 1 impossibilities that might be achievable within the century, while human teleportation is Class 2 and would take several centuries 'if it is possible at all' ... 'although it is allowed by the laws of physics.'
See item 9 for a related issue.
3. Time machine
They start off by noting that it is actually possible, then quote Kaku himself to say that it would require too much energy. And that it sets up paradoxes that cannot be resolved.
Well, time travel is a difficult one, to be sure, and yet ... they're rather vague. They don't actually say that they're only considering travelling back in time, though it is implied. And the fact that relativity says time is relative - a fact we actively rely on today in technologies like GPS, whose satellite clocks need relativistic corrections - seems to have bypassed them. The grandfather paradox is certainly a very interesting possibility, though others exist.
Kaku has this listed in 2.2, saying that it might take centuries to become possible and would require a Theory of Everything. So other than coming to the opposite conclusion and calling it impossible, the article pretty much echoes Kaku's points.
4. Faster than light travel
There are a lot of good points in this one, from the causality violations, to the energy requirements, to the consequences of using an Alcubierre drive. I don't know that there's any one specific thing that really rules out fittling, but there certainly are a lot of obstacles. Much as the lack of supra-light travel puts a damper on a lot of neat space travel, I have to admit there are a lot of problems.
And yes, by getting arbitrarily close to light speed you can get anywhere in near-zero subjective time - the numbers below give a feel for it - but it's not the same thing.
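As a rough sketch of that "near zero time" point, here's the standard special-relativity time-dilation calculation; the 4.24 light-year distance to Proxima Centauri is just an illustrative trip:

```python
import math

def shipboard_years(distance_ly: float, v_frac: float) -> float:
    """Proper (traveller) time in years for a trip of distance_ly
    light-years at v_frac of the speed of light."""
    earth_years = distance_ly / v_frac           # Earth-frame duration
    gamma = 1.0 / math.sqrt(1.0 - v_frac ** 2)   # Lorentz factor
    return earth_years / gamma                   # time felt on the ship

# Proxima Centauri is roughly 4.24 light-years away.
for v in (0.9, 0.99, 0.9999):
    print(f"v = {v}c: {shipboard_years(4.24, v):.3f} years aboard")
```

The Earth-frame trip time never drops below 4.24 years, but the traveller's clock can be squeezed arbitrarily close to zero - which is exactly why it isn't the same thing as FTL.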
Kaku discusses FTL in 2.1 and puts it in the realm of Type III civilisations, saying it might be a millennium away, given the physics required and the energy expenditure. Particle accelerators 10 light-years long? I wonder whose budget sheet that gets written on.
5. Generation ships
The fundamental problem is apparently that the resource and material requirements would be too great for any ship, though they see no problem with suspended animation.
Resources and materials are a challenge, but not an impossible one to overcome. Scale and effort is one thing (you're heading to the stars!), but I don't see why it couldn't be done. Of course, as with many of these items, other technologies are likely to come along and render generation ships redundant, but I certainly don't see a "fundamental problem."
Amusingly, this section says that because of the issues with generation ships, suspended animation would be a "much more reasonable solution." Item 8, why are you placed so far from this one?
Kaku talks about starships in 1.9, placing them at just Class 1, though he only touches on the issues of generation ships for a sentence or two before saying it would be easier to go with suspended animation, or nanoships.
6. Gravitational Shielding
The section states explicitly that it's impossible because it violates the laws of physics.
Kaku talks about force fields in 1.1, but I can't find any discussion of anti-gravity, so I'm going to have to leave this one.
7. Personal force fields
Well, this one follows on from item 6, but says that while ships could have force fields, humans couldn't use one without being fried by it. And it would have to use electromagnetism, since the other forces "are either way too weak or are constrained across short distances." Given those two objections, I'm not sure how he's comfortable with force fields for ships - though I suppose you could have other shielding to counteract the force-field shielding. Not sure how that helps with the second point.
I suppose being on a ship could be easier since it's out in space surrounded by vacuum, whereas humans have adjacent material (the ground) that would cause issues.
But I digress.
Back to Kaku's section 1.1, where he actually proposes a slew of technologies for the personal force field: laser curtains, plasma windows, carbon nanotubes, photochromatics - each one designed to stop a different kind of incoming assault. Within the century, he thinks, we'd have something that roughly fits the science-fiction concept of a personal force field.
8. Reanimation from cryonic suspension
The objection here is that freezing damages cells, and you can't then thaw them out without having killed them. This one hedges its bets a few times, saying things like it's not the theory that's at fault, just the current methods we use, and that other means of reanimation might be possible, just not from freezing people.
This really just sounds like: current techniques aren't good enough for one specific type of reanimation. I'm not really sure why it needs to be that specific, except to make it sound less possible.
Kaku doesn't address this issue in Physics of the Impossible.
9. Continuity of consciousness after uploading
This one starts splitting hairs on the difference between uploading a mind ("distinct possibility") and the continuity of transferred consciousness ("open question").
Okay, splitting hairs is a little harsh. The continuity is a real problem. And it's a seriously intriguing thought process, that of the non-negligible transfer period of a mind ... the time of a partial mind.
And this one also raises the issue of teleportation (item 2), which they acknowledge. They seem to admit that some of the same issues arise - multiple copies each insisting they're the genuine article (personally, I don't see why they aren't) - but that doesn't seem to bother them about the possibility itself; it only troubles them with regard to continuity of consciousness. In other words, now it's "sure, you can copy people, but which one has the real mind?" whereas before it was "how could you copy people, you'd be destroying the original." And if you can copy a mind well enough to run it digitally, then there's certainly no impediment to copying the body too, should you so desire. Or even constructing something else you'd prefer. Like a fixed-up body. Or a non-human body.
I'm surprised there was no mention of Kurzweil or Drexler here.
Kaku doesn't give this one much of a mention, but he touches on it in section 1.7 of Physics of the Impossible, which is about robots and AI. Towards the end he makes brief mention of the Singularity, and the possibility of either merging organics with silicon or uploading minds. He refers to Moravec stating that it might happen in a "distant future", which Kaku says is "not beyond the realm of possibility." Which is of course much more pessimistic than most transhumanists, who are looking at a timeframe somewhere around the middle of this century for mind uploading and AI explosions.
10. Infinite data processing
The argument is that you can't live forever; therefore you can't process forever, or think forever. Eventually the universe either dies a heat death or ends in a Big Crunch.
Well, you don't really need a lot of material for this one, do you? I mean, it's going to end one way or another, so processing and thinking has to come to an end, right?
Maybe.
Or maybe not.
Kaku doesn't address the matter in his book.
And I'm sure I don't have to remind people of Asimov's most excellent short story, The Last Question.
However, there's a very intriguing book by physicist Paul Davies called The Last Three Minutes. In it, Davies considers both endpoints and asks whether an infinite number of thoughts could occur; for both scenarios he manages to reach an affirmative answer. If you want the details, I'm afraid I'll have to refer you to the book though...
Well, that's a wrap.
It was fun digging out the old Kaku book.
Go read the article if you haven't already, since they probably explain their position better than I have.