Sunday, October 21, 2012
Flash Fiction: Gate
Dare I enter?
Scraped and pockmarked from a thousand thousand years bared to space, the outer walls scream their age into the inky blackness that enshrouds them. Stretched and smeared across this drifting rock, they bleed away in gunmetal fractals from the parted lips of the gate to recede into the enclosing darkness.
A darkness thicker and deeper than space itself.
A darkness that is broken not by starlight or gaslight, but by a sullen amber light dribbling from a makeshift craft that's dragged itself across the void. Strained and worn, but with tenacity woven through its core, the craft had traveled far beyond the outer tendrils of civilisation with its singular cargo. And now it lies above me, writhing awkwardly in slow-motion as its omnispective gaze is defeated by the enigma of the gate.
And so I stand here, now, facing a structure no human hand has touched, no human face has witnessed. And as a crepuscular vapour releases its millennia-long hold on the rock underfoot, awoken by the unexpected bombardment of dull streams of photons, I ponder anew.
Dare I enter?
Not of my own free will.
Not again.
image source: Christopher Bowler.
Wednesday, September 5, 2012
On the failure of sci-fi
This is mostly a rant aimed at dramatic presentations. TV and film are the gross offenders; books less so, if you're looking in the right places.
As much as I like a lot of sci-fi movies, when it comes to portraying the future, they're just lamentably wide of the mark. Maybe that's fine if you're not looking for something prophetic. If all you're after is taking one tech aspect of today, advancing it in a bubble void for umpteen years and then tacking it onto present day society, then a lot of movies fit the bill.
Hell, if you argue that what you want is just a way of presenting today's issues in a different context, then you're in luck. There's plenty of that. But a way of envisaging what tomorrow will be like? Not so much.
I'm a fan of Aliens. Hell, who isn't, right? And you've got some tasty tech, some fine roughed-up starships, some interesting biological critters from another world. But realism? No. That's an 80s movie with 80s people (if that) transported to a distant galaxy.
The same applies to Prometheus, from this year. Take people from today, add a little tech that's isolated from everything else, juice it up and sprinkle it on top.
And so forth for the majority of shows that try to look more than a decade or so into the future, and for three main reasons.
First, movies and TV shows tend to only look at isolated bits of tech. Hey, let's add in some cool advanced guns. Or spaceships. What they skip is the integration of multiple lines of tech. Everything from comms to health to data and processing. But especially the convergence of the big three: AI, genetics and nanotech. You plot out the developments in these areas and where they are going and you don't just have a future consisting of Joe Blow from today holding Cool Gadget #3 in his hands. Joe Blow is radically different. His interactions are radically different.
Second, the timelines for most of their tech are woefully conservative, because people find it hard to grasp that progress is not linear, but exponential. Plotting out cool gadgetry a hundred years in the future is almost laughable when you start applying curves to progress.
And finally, related to the second, there's almost this sense that people live in a fairly technologically stable universe. They'll show you a person from the future and things for this character are pretty much the same for them as when they were younger. No, they won't be. It's not like it's the 1800s and the world of the parents is the same as the world of the children. The world of people growing up in the 2020s is going to be radically different from the world of those growing up in the 2050s. Progress is not stopping, and it's not slowing down.
If you've got a movie that's a hundred years in the future and there are lumps of meat walking around surrounded by dumb matter, then without a lot of explaining I'm just going to be enjoying it as a fun fantasy piece. Because that's not even trying to be a realistic window to our future.
Sunday, August 12, 2012
On AI's past, on AI's future
So much of what has been labelled as AI, when projecting into the future, has had the sticker torn off once the accomplishment has been achieved.
There's an interesting article (http://bit.ly/MBmLSA) from American Scientist that covers some of the early days with Minsky and McCarthy, through to the work on checkers with Schaeffer, language translation and knowledge systems such as Watson and the online AI courses that are now available for anyone to sign up to.
The main point of the article is that many of AI's successes come from "shallow understanding" rather than "deep understanding." Language translation employs word and phrase lookups; checkers has endgame databases and move lookaheads.
Each time we think of problems that would require "true thinking" we later seem to move the goalposts and decide that the solution didn't, in fact, truly require any deep insights to be made by the computer.
And yet progress marches ever on. More and more AI systems are being employed in ever more diverse fields, whether it's refining city management or controlling satellites or aiding diagnosis - or even at the level of knowing which ads you'd most like, books you'd buy or which people are most influential in certain circles for certain products.
The question will be, at what point does the illusion go away?
At what point does it become impossible to tell ourselves it's not really smart, it's just applying some basic rules, iterated over a lot of flops?
Because given the non-linear rates of advancement, by the time it starts getting noticed, what's on the cutting edge will be what was otherwise around the corner, and what's around the corner from that, well ...
I think there will be some interesting times ahead where we end up with systems that cross over the following divides:
- of teaching themselves
- of sentience
- of self-awareness
And even in those areas, it's not a binary situation, and I think we'll see debate over systems that have limited learning vs a more flexible and unlimited learning system. And the terminology will be hotly debated. What are the bounds? What is the environment? What is being learned?
I suspect in some ways that debate will continue all the way up to the point where machines start clearly surpassing humans and have ticked off all three boxes above: being auto-didactic, sentient and self-aware.
And then there's the complex issue of creativity.
And because I'd argue that none of those are entirely binary, and that progress is more continuous, I think the discussion will carry on for some time, and merely start asymptoting to zero without actually having a moment where we just stop. Though it's likely that there will be drop-off points where we go, see, this particular machine/accomplishment is strong evidence...
In a way, that's decades off, depending on how you like to put your marks on curves. But one way or another, we're in for some upheaval.
To quote William Gibson: the future is already here, it's just not very evenly distributed.
Monday, August 6, 2012
Technologies that can't ever exist? Or can they?
In this interesting article from io9, the argument is made that some of sci-fi's futuristic tech just "can't ever exist in reality."
Which seems a fairly far-fetched claim. If they were stating these were things that you wouldn't see in the near term, I could probably relax and enjoy it a little more.
In some ways, they do actually acknowledge this. They admit occasionally that things are possible ... so I'm going with the belief that they've just overemphasised the impossibility of some of the items for dramatic effect. And for generating discussion - which they've achieved. Certainly, they're mostly pretty far out things they're talking about.
Fortunately, I have a copy of Michio Kaku's excellent Physics of the Impossible with me, so I'll drop a line and reference to some of the items in io9's list. Most of their items are addressed in his book.
Kaku breaks things into three sections,
- the first for impossibilities right now, but available in the near(-ish) term, say this century or so. No known issues with physics as we know them.
- the second for things which aren't disallowed in physics, but we'd probably need to develop a new understanding of the laws of physics in order to perform them, or they'd take many centuries in order to accomplish.
- the third is for things that the laws of physics, as we currently know them, outright forbid.
And now for the io9 list:
1. Lightsabers
The argument is that you can't contain the beam, and that "a power source that powerful doesn't, and can't, exist." Convincing argument that.
Kaku addresses this in 1.3, says there are no theoretical issues and suggests a timeline of perhaps by the end of the century, making it Class 1.
2. Human teleportation
They admit it's done on the small scale, but argue that it cannot scale, that every molecule would have to be in the exact correct location, and since it's a destructive copy, then it's a suicide machine.
Well. Scaling is obviously a seriously non-trivial issue, but I failed to note the point at which it just stops working. Further, our bodies aren't some fixed works of art where every molecule is forever fixed - we're a walking Ship of Theseus. And if you've just made a copy, then if the first is destroyed on the copy process, I don't see how that's particularly relevant, let alone a reason why it can't be done.
Kaku actually vacillates on this a little in 1.4 - despite teleportation being in section 1, he says viruses and cells are Class 1 impossibilities and might be available within the century, human teleportation is Class 2 and would take several centuries 'if it is possible at all' ... 'although it is allowed by the laws of physics.'
See item 9 for a related issue.
3. Time machine
They start off by noting that it is actually possible, then quote Kaku himself to say that it would require too much energy. And that it sets up paradoxes that cannot be resolved.
Well, time travel is a difficult one to be sure, and yet ... they're rather vague. They don't actually say that they're only considering travelling back in time, though it is implied. And yet the fact that relativity says that time is relative and we actively use this fact in many technologies today seems to have bypassed them. The grandfather paradox is certainly a very interesting possibility, though others exist.
Kaku has this listed out in 2.2, saying that it might take centuries in order for this to become possible, and would require a ToE. So other than coming to the opposite conclusion and saying it's impossible, the article pretty much echoes Kaku's points.
4. Faster than light travel
There are a lot of good points in this one, from the causation violations, to the energy requirements, to the results of using an Alcubierre drive. I don't know that there's any one specific thing that really works against fittling, but there certainly are a lot of obstacles. Much as a lack of supra-light travel really puts a damper on a lot of neat space travel, I have to admit there are a lot of problems.
And yes, by getting arbitrarily close to light you can get anywhere in near zero time, but it's not the same thing.
Kaku discusses FTL in 2.1 and puts it in the realm of Class 3 civilisations, saying it might be a millennium away, due to the physics required and energy expenditure. Particle accelerators 10 LY long? I wonder whose budget sheet that gets written on.
5. Generation ships
The fundamental problem is that apparently the resource and materials would be too great for any ship, though they see no problem with suspended animation.
Resources and materials are a challenge, but not an impossible one to overcome. Scale and effort are one thing (you're heading to the stars!) but I don't see why it couldn't be done. Of course, as with many of these items, other technologies are likely to come along and make this redundant, but I certainly don't see why a Generation Ship has a "fundamental problem."
Amusingly, this section says that because of the issues with Generation Ships, suspended animation would be a "much more reasonable solution." Item 8, why are you placed so far from this one?
Kaku talks about starships in 1.9, placing them as just Class 1, though he only touches on the issues of generation ships for a sentence or two before saying it would be easier to go with suspended animation, or nanoships.
6. Gravitational Shielding
The section states explicitly that it's impossible due to violation of physics.
Kaku talks about force fields in 1.1, but I can't find any discussion of anti-gravity, so I'm going to have to leave this one.
7. Personal force fields
Well, this one follows on from item 6, but says that whilst ships could have force fields, humans couldn't use one due to being fried by it. And it would have to use electromagnetism, since the other forces "are either way too weak or are constrained across short distances." Given those two objections, I'm not sure how he's comfortable with force fields for ships, though I suppose you could have other shielding to counteract the force field shielding. Not sure how that helps with the second point.
I suppose being on a ship could be easier since it's out in space surrounded by vacuum, whereas humans have adjacent material (the ground) that would cause issues.
But I digress.
Back to Kaku's section 1.1, where he actually uses a slew of technologies for the personal force field. Laser curtains, plasma windows, carbon nanotubes, photochromatics - each one designed to stop various incoming assaults. Within the century he thinks we'd have something that roughly fits the concept of the science fiction personal force field.
8. Reanimation from cryonic suspension
The objection here is that freezing damages cells, and you cannot then thaw them out without having killed them. This one hedges its bets a few times, saying things like it's not the theory, it's just the current method we use. And that other methods of reanimation might be possible, just not from freezing people.
This really just sounds like, current techniques are not good enough for a specific type of reanimation.
Not really sure why it needs to be this specific, except to try and make it less possible.
Kaku doesn't address this issue in Physics of the Impossible.
9. Continuity of consciousness after uploading
This one starts splitting hairs on the difference between uploading a mind ("distinct possibility") and the continuity of transferred consciousness ("open question").
Okay, splitting hairs is a little harsh. The continuity is a real problem. And it's a seriously intriguing thought process, that of the non-negligible transfer period of a mind ... the time of a partial mind.
And this one also raises the issue of teleportation (item 2), which they acknowledge. Though here, it seems to be that they are admitting some of the same issues arise, that of multiple copies insisting that they're all the genuine thing (personally I don't see why they aren't), but that doesn't seem to bother them with the possibility of it all here, it just irritates them with regard to the continuity of consciousness. In other words, now it's almost like it's "sure, you can copy people, but which one has the real mind?" and formerly it seemed to be "how could you copy people, you'd be destroying the original." And if you can copy the mind, such that you can run it digitally, then there's certainly no impediment to copying the body, should you so desire. Or even constructing something else you prefer. Like a fixed up body. Or a non-human body.
I'm surprised there was no mention of Kurzweil or Drexler here.
Kaku doesn't give this one much of a mention, but he touches on it in section 1.7 of Physics of the Impossible, which is about robots and AI. Towards the end he makes brief mention of the Singularity, and the possibility of either merging organics with silicon, or uploading minds. He refers to Moravec stating that it might be in a "distant future", which Kaku says is "not beyond the realm of possibility." Which is of course much more pessimistic than most of the transhumanists who are looking at a timeframe somewhere around the middle of the century for mind uploading and AI explosions.
10. Infinite data processing
The argument is that you can't live forever; you therefore can't process forever, or think forever. Eventually the universe either dies a heat-death or you hit a Big Crunch.
Well, you don't really need a lot of material for this one, do you? I mean, it's going to end one way or another, so processing and thinking has to come to an end, right?
Maybe.
Or maybe not.
Kaku doesn't address the matter in his book.
And I'm sure I don't have to remind people of Asimov's most excellent short story, The Last Question.
However, there's a very intriguing book by physicist Paul Davies called The Last Three Minutes. In the book, Davies discusses the two endpoints and whether or not an infinite number of thoughts could occur. For both scenarios he manages to come to an affirmative answer. If you want details, I'm afraid I'll have to refer you to the book though...
Well, that's a wrap.
It was fun digging out the old Kaku book.
Go read the article if you haven't already, since they probably explain their position better than I have.
Tuesday, May 29, 2012
The evolution of self checkout systems
It's interesting looking at how the whole self checkout system at supermarkets has evolved since its recent introduction (at least here in Oz).
The early systems were fairly simple.
Click to start. Swipe things through. Once you filled a bag, press New Bag, wait, then remove your filled bag onto the ground and continue swiping. Get to the end and select one of several options for payment.
Then there were the systems that allowed you to remove the bag when it was full without prior notification. Once it was removed, it asked you to confirm that you'd just removed the bag.
Now that confirmation of bag removal has gone, which, together with no need to press Start before scanning, leaves us with a smoother workflow.
This is the sort of change that could have been in the system from day one.
However, there are a few other aspects that have changed at the local, and I'm unsure as to whether they could have been as effectively rolled out on introduction.
In some ways, they seem to make the system a little more complex in order to offer more features / flexibility.
I mean, it seems intuitive enough now. But is that just because I (and other shoppers) have had some time to become acclimatised to the system? Always difficult to tell when you're trying to recall what things were like the first few times you encountered them.
Now we have the option of cash out or shopping to begin with, reintroducing that top level menu prior to swiping.
End level menus have changed upon completion of shopping. I think this gives a fast and flexible workflow, but I also think it's introduced a couple of extra menu levels that weren't there before.
Certainly I'm liking where self serve checkouts have gotten to and where they are going, but it has given me some pause as to whether you can just roll out the wonderful end system on introduction, rather than going through years of modification to get there. It's not a technical issue, it's a people process issue.
Maybe you can just introduce the ultimate system on day zero. But maybe you can't, quite.
Sunday, May 27, 2012
Named return values in C++11 alternative function syntax declarations
C++11 introduced the ability to specify functions using a different method to the one that's traditional in C and C++03. This notation is known as alternative function syntax, and involves placing the return type at the end of the function signature rather than at the start.
bool read(int itemIndex); // traditional.
auto read(int itemIndex) -> bool; // alternative function syntax.
Now, what bothers me is why I can't specify the name of the return value using C++11 alternative function syntax. Is that not a major oversight?
The parsing should be trivial, more so than allowing it in standard function syntax.
Here's what I would have liked to have written:
auto multiply(int x, int y) -> int product;
auto read(int itemIndex) -> bool success;
Of course, there's nothing that stops me now just adding this as a comment after the statement:
auto read(int itemIndex) -> bool; // returns success, or not.
But if you're going to argue that a comment suffices, then presumably you would also have been quite comfortable with not being allowed to specify argument names in traditional declarations.
bool read(int /* the item index */); // returns success, or not.
The above would of course be undesirable, since the declaration gives the caller information as to what is being passed and what is being returned, which goes beyond the types.
So if we added these names to the return value in the declaration, is everything well and good, with no complications? As it turns out, no, this would indeed raise several issues of its own.
If you have the notation in the declaration, why not have it in the definition? You have parameter names in the definition - you have to, if you're going to use them. So if you supply the name of the return value in the definition, then presumably your function body is going to refer to that name. Which is fine in a sense, since you have to return something and now you've already got a name for it.
But in that case, how is it defined? What if the type that you are returning is an object with no default constructor? Suddenly there are multiple opportunities for the user to have to construct / assign an object to the return value name. Should they do this the way they would normally, but be forced to select the name that's provided in the function declaration? If the value was a simple native type, it would be almost sad that you couldn't just write
success = true;
return success;
or even more succinctly, as you do now,
return true;
One solution would be to allow for unnamed return values, as in the second case, which are automatically assigned to the return name (or essentially elided).
Another is to allow the use of a naked return statement in all cases, and require the compiler to detect uninitialised use of the return value.
success = true;
return;
This is starting to seem a little unintuitive, though perhaps it's just because it's deviating a fair bit from previous practice.
So having the caller see the name of the return value is as useful as seeing the name of the parameters they're supplying, but it may not be a failure in the alternative function syntax given the non-trivial issues that it raises.
I wonder if the issues it raises are why it never made it to the Standard.
Monday, May 21, 2012
On Emergency Numbers
What is the ideal emergency number?
There are different ones for different countries, such as 111 for New Zealand and 911 for the USA. But these are all fairly arbitrary numbers.
911 really is no better than 713.
It seems to me that the ideal emergency number would be something like 000, which is what Oz uses.
You couldn't use 0 by itself, as that would be too trigger-prone.
00 is a possibility, and probably a strong one.
Using four zeroes is another, but we're starting to get into the realm of an arbitrary number of zeroes now, and arbitrariness is what I was trying to avoid.
Having a string of zeroes of some length seems to be the only reasonable solution, but is this just a case of culture bias, from living in Australia with such a system?
Friday, April 13, 2012
Symphony of Legends
So tonight the Melbourne Symphony Orchestra, together with the Concordis Chamber Choir, performed music from many contemporary video game favourites. Plenty of stuff from Blizzard, with various selections from Warcraft and Starcraft, along with Uncharted 3, the Bioshock games, Skyrim and plenty more. Above and behind the orchestra were large screens showing off gameplay and cinematics from the games being played.
Oh, seeing the Protoss on the big screen!
Oh, seeing Diablo on the big screen!
Speaking of which, we heard Diablo 3 music! We saw Diablo 3 footage! It was all pretty glorious. Of course, a number of people were pulling out phones at that point for a snap or two.
The general lighting effects during the pieces were quite well done. There were all manner of colours and arrangements dancing along the ceiling, along the walls and out of the side spotlights.
The night was hosted by Wil Wheaton, Scott Kurtz and Kris Straub. From the description of the event it sounded like Wil would "make an appearance" (I took this to mean pop in and say hi halfway through then promptly leave) but Wil actually was there the entire night.
I wasn't expecting the guys to pop in between each different game, but they did a great job with highlighting not just the particular game and what it brought to the experience, but also a brief background on who composed the music and any interesting anecdotes from its creation.
Soul Calibur capped off the night. Oh my, you forget how exaggerated those female characters are until they're on the big screen in all their beyond-comic-book-art glory. Footage from Soul Calibur, and most of the games in general, tended to be from more recent releases, since it doesn't take too many years before the sort of thing you're displaying starts to look rather prehistoric.
One of the best moments of the night was actually the Soul Calibur conclusion, where two of the top Aussie players duelled it out, with the in-match music played live by the orchestra. V1p3r played off against Woody and man, Woody was on fire. Must have cleared the first two fights near perfect. Lost the next, but won after that. The crowd was very enthusiastic between rounds. It has me thinking about seeing the upcoming gaming exhibition.
Disappointments?
No Tristram theme. I've actually listened to this a number of times whilst working - and of course I've hit it up on YouTube right now - so I was kinda thinking this would get played. Everyone loves the Tristram theme. Hell, they even talked about the Tristram theme when discussing the secrecy around the recording of the Diablo 3 music. Oh my god, I'm having serious flashbacks to standing around for ages with red and blue potions, thinking how screwed I was, how I was running out of portal scrolls and how I couldn't afford anything at those ripoff shops.
Oh, and no Guile theme. But I suppose, realistically, deep down inside, I knew this was never going to get played.
All in all, a fantastic night. Topped with ice-cream and apple crumble and Black Russians. But no Wil Wheaton photo (damn you, battery) and no Wil Wheaton signed poster (damn you, hindsight).
Friday, April 6, 2012
Antonym fun
There are many words in English that you can negate by adding a prefix such as un-, il-, non-, etc. However, several of these words are much more common in their negation than in their root form. It's always fun inserting a few of these uncommon root forms into speech:
- Complain about maculate code.
- Refer to the person who screws things up as peccable.
- Rail against nocent government policies.
- Compliment people by telling them that they look kempt.
There was a short story that I recall reading quite some years ago that was based around the use of such root words. Alas I cannot recall the title or the author or I'd link it from here.
The page below relates a few other interesting words in the same vein:
- The opposite of indefatigable is just fatigable (both in- and de- are dropped).
- The antonym of incline is disincline, not decline ("I was inclined to follow his suggestion", "I was disinclined to follow his suggestion").
Oh, the English language: travesty or endless supply of riches?
http://www.rinkworks.com/words/negatives.shtml
Saturday, March 31, 2012
Top 3 books read in 2011
Yes, it's rather late for this sort of thing: it's the last day of March and therefore the final day of the first quarter of 2012. Somehow drawing attention to the fact that it's still the first quarter of the new year makes it seem like this isn't really as late as it is.
I didn't pay that much attention last year to the books I read, so it's all going to be a bit vague, but here, briefly, are the three best books I read last year:
In order of reading.
The Quantum Thief, by Hannu Rajaniemi (2010)
This one stands out a little for me, selection wise, in that it was a very recent publication. Though I have a bit of a bent these days for modern SF, that normally means reading something from the last couple of decades.
There was a fair bit of hype around this debut and it doesn't fail to deliver. Hannu writes a very dense, very hard post-singularity sci-fi novel that throws some brilliant concepts around some pretty cool characters. Post-humans abound, working with even more powerful entities, in and around the solar system perhaps a few centuries hence. Contains detectives, thieves, gaming culture and a whole lot of musings on the implications of future tech.
Looking forward to the sequel coming out later this year, but I'll need a reread of The Quantum Thief beforehand, methinks.
Trivia: contains a recommendation on the cover by Charles Stross, who wrote the very next book I read and is next up on this list.
Accelerando, by Charles Stross (2005)
Selected by me for a bookclub, since I'd been meaning to get around to Stross at some point and this one had received some notoriety. The book is a collection of 9 continuing tales, though it didn't suffer for that. Taken together, they form a novel focusing on a man at the cutting edge of technology, and his descendants, as they head into and then through the technological Singularity, beginning in roughly the present day. Slashdot makes good background reading for this (and even gets a mention), especially for the first third that deals with the ramp-up to the Singularity.
Probably has the highest density of ideas to pages of any book I can recall reading. Stross throws down concepts and references without even bothering to stop and see if the reader is still with him because he's already bringing something else up.
Pushing Ice, by Alastair Reynolds (2005)
I'd picked up a slew of books by Reynolds at a garage sale recently and had rather expected that the first Reynolds book I'd read would be from the Revelation Space series. The first book in that series was in fact sitting in my short-short list of books waiting to be read, whereas this book had languished in a more general pool of books: the second tier, as it were. Still, for whatever reason this book was picked up in the dying weeks of the year and wham! it delivers emphatically, with a strong plot and characters, along with a high degree of page-turning readability. It has an adventure feel to it, and is quite reminiscent of Clarke's Rama series. An unknown alien object is discovered, and a crew of comet-miners in its vicinity are dispatched to investigate. Cue humans interacting with cool alien tech.
General Notes
All three books are the only novels I've read by those authors, which also helps me tick off on another list a couple of the authors that I wanted to read something - anything - by.
All of the books are sci-fi, and in fact all are hard sci-fi.
All of the books are quite recent, being published within the previous 6 years.
Only one book (The Quantum Thief) is set in a specific time - the other two span non-trivial ranges.
All of the authors are European, and two of the three are from the UK.
Comments on the above novels are welcome, as well as your own favourite reads from last year, whatever the genres.
Thursday, March 8, 2012
Visual C++ 11 Beta 1 Initial Thoughts
I've started giving Visual C++ 11 a bit of a workout, and there's much to like about it.
Stability might be improved over 10 - I'll withhold judgement on that one for the time being, however.
We've got range-based for loops. That's a nice improvement since the DP last year. Now you can do things such as:

#include <vector>

auto main()
-> int
{
    // TODO - discover why this doesn't show up on Task List.
    std::vector<int> joe;
    for (auto bob : joe)
    {
    }
}

In the above code we can simply declare bob as a deduced type of the container joe. Handy when you're iterating over your containers and don't want to repeatedly deal with dereferencing (sorry, indirecting through, according to N3362) an iterator.
Pity there's such a mess with the Task List comments not working. I can't get a single one to show.
And I won't even elaborate on the fact that it says it will only report on the ones in open files. Sheesh: I've got a solution. Feel free to inspect it.
The support for a light theme and a dark theme is nice. A little interesting that it's just a two-horse race, but still. Alas, I've kinda shot myself in the foot here by overriding some of the backgrounds in order to force a dark background - I'll have to undo some of that to get this fully working (should I ever want a light theme, and I might some day if I coded when the sun was up).
I really like the fully fleshed-out syntax highlighting, though I think I loved it more in the Developer Preview. I'm sure I used to be able to set things as italics - the example of parameters being in italics is mentioned across the web - however the Beta most definitely doesn't have an italics setting, just bold. I have a function definition in front of me that's never known the loving hand of italicisation.
Oh, the monochrome look. That's right. For those who haven't heard, Visual Studio now uses about three shades of grey as its entire colour palette. Outside of your code, that really isn't an exaggeration. There's a dark grey, a light grey and a - wait for it - intermediate grey. Icons have been drained of colour, and general boxing and lines have been removed from dialogs to give them a slabs-of-grey-paint look.
It was a little WTF at first. You start thinking that this is a feature they've introduced for VS 11 in order to have colour introduced as a feature in VS 12. But I have to say that after the initial surprise, you get used to it pretty quickly. It's distinctive. Better, worse? Not much in it for me at this point.
Microsoft have folded a lot of the VS 2010 Productivity Power Tools into the VS 11 release. The theory goes that they get some out-of-band feedback on enhancements and iterate on them, then add them into the next release - and here we are. Alas, one of my favourites was the document view scrollbar, and since the Power Tool isn't available for VS 11, that means I'm without the enhanced scrollbar altogether.
I don't have a picture handy of the VS 2010 Power Tool one, so here's an image taken of the scrollbar from the Sublime Text editor home page. Very much the same sort of thing: I love being able to see the general structure, as well as search matches, bookmarks, breakpoints, etc. Sublime's not a bad text editor, btw, though I tend to prefer Notepad++.
Overall, Visual Studio seems pretty punchy and responsive, though Find All References is as slow as ever.
Note that the Beta you download is Ultimate. Which is in some ways a pity, because it comes with a boatload of stuff that I'm never going to see in day-to-day usage (since I'd be using something like Professional). Still, it does give you an opportunity to see what you're missing, including the perf tools and the architecture stuff.
There are quite a few changes, both in the IDE and in the language support, so it will take quite some time to really go through and explore what's there. This has just been a quick write-up to gather some thoughts, prompted by the discovery of some unexpected C++11 support in the form of range-based for loops.
Tuesday, February 14, 2012
On Latin and English and Mappings
It's illuminating how much of another language you can pick up if you've got a reasonable grasp of a related one.
And in the case of languages, there are a whole heap of related ones.
Now, I'm not talking about reading books, or even grasping full sentences. But pithy quotes? Not a total write-off.
The spark for this post was a Latin quote by a mate,
"Omnia mutantur, nihil interit"
But the origin of this post can be found some time ago.
When looking into the artificial language Ido, I came across a handy table describing words and the closest match in several languages.
One example was the noun, "kavalo", which in English means "horse". So far, so confusing. But the closest match shows that we can (imperfectly) map it to the concept of "cavalry", and this is indeed how I remember the word (along with the fact that nouns in Ido end in -o).
Fast forward to this evening and we have this Latin phrase "Omnia mutantur, nihil interit". Now, I'm not of an age where I learnt Latin in school. I'm also not of the bent that I decided to learn a dead language of my own accord. I must admit I'm fond of the occasional Latin phrase or word (readers will recall my hunt for ex-situ's antonym) but, all things told, I really don't know any Latin worth mentioning.
The phrase was posted without an English translation, perhaps because it was assumed to be well known enough to not require it. Alas, that was not the case for me. And yet, looking over the phrase, mappings immediately became evident.
Omnia: omni- (prefix for all, everything: omniscient, omnivore)
mutantur: mutate (to change)
nihil: nil (nothing)
interit: ... ( ...? )
So close, stymied at the last.
Now, I'm vaguely familiar with the English translation, but even with the first three words I was unable to complete the phrase and had to resort to Google. "Everything changes, nothing is lost", for those of you still wondering.
So the question becomes: where is the link between the Latin "interit" and the English "to lose, perish, decay"?
The closest I can get is intermit, and that's not very close at all.
Anyone?
Thursday, January 19, 2012
SOPA Oddity
MPAA to User One
MPAA to User One
Take your videos and put your labels on
(Ten) MPAA (Nine) to User One (Eight)
Commencing upload, servers on
Check attribution and may God's love be with you.
This is User One under control, I'm looking at my screen
And I'm surfing in a most peculiar way
And the sites look very different today
Here am I sitting at a terminal, disconnected from the world
Lady Liberty is blue and there's nothing I can do
This is MPAA to User One, you really must obey
And the papers want to know whose files you share
Now it's time to post your comments if you dare
Though I'm past one hundred thousand views, I'm feeling very scared
And I think my blog just knows which links to post
Tell my ISP I love it much, it knows
Subscriber mail to User One, your server's dead, there's something wrong
Can you hear me, User One?
Can you hear me, User One?
Can you hear me, User One?
Can you...
Here am I sitting at a terminal, disconnected from the world
Lady Liberty is blue but there's something I can do...
Sunday, January 15, 2012
Unfinished Series
I'm not the sort of person who's entirely comfortable starting a series that hasn't been completed. Many others seem to have no such issues, as demonstrated by the myriad people picking up books starting a new series within moments of them hitting shelves. Myself, I usually only start an unfinished (non-open) series if the books are a gift or a loan.
I'm pretty sure it comes down to the fact that if I really enjoy it, I'm looking to see where it goes and to get some closure. I don't have any problem with picking up books from an open ended series (Gor, Discworld) and reading them out of order. I may have a slight preference for reading earlier works in the series before later ones, but not a strong compulsion.
At the moment I've got two unfinished series on the go, both unintentional. Wait, that could be three if you count Rawn's The Ruins of Ambrai. But given it's been 15 years since the second book I think we can safely discount that one.
The first unfinished series I have going is the Penrose series by Tony Ballantyne. I picked up the first book when it hit a mass market paperback on pre-order discount on the basis of a neat blurb and put it in the short list not realising it was part of a trilogy. After reading several consecutive less than stellar books I needed something punchy, exciting and - above all - highly readable, so I grabbed Twisted Metal off the pile. The book delivered the much needed invigoration to my reading, but towards the end I started to realise that the story wasn't really wrapping up. Admittedly Ballantyne could have been taking a leaf out of Peter F Hamilton's book and leaving the start of the wrap-up for the final half a percent, but no, he's left it hanging for book two.
Penrose book two is already out (2010) and I've got a copy sitting in the short-short list, but I'm not seeing even a pre-order or title on the third book and the author described it late last year as "yet uncompleted". Here's hoping it doesn't become a Captal's Tower.
The second series I started accidentally was Hannu R's The Quantum Thief. The book was getting some great press after its release and billed as Hard SF. The novel itself is fantastic and flooded with killer concepts but again, upon reaching the end I found myself stung by the lack of closure. Not too much, I have to admit, as a lot did wrap up, but it certainly had me scouring the net for more information on the trilogy. Not as lucky here as I was with Ballantyne, as the second book is only due for release third quarter this year.
The question now is what to do about these series?
Well, specifically what to do about book two of Penrose. I've got it lying around, and it's been maybe a year since I read the first book. I thought maybe I'd read it about halfway between the first and the third book, but that's a little difficult to place, temporally speaking, when the third book hasn't been released. Currently thinking about reading it soonish so the first one hasn't faded, with the expectation that the third comes out sometime next year so the second won't have faded by then.
The Fractal Prince, sequel to The Quantum Thief, I'll probably end up buying on release (paperback Octavo and HB will be published about the same time) and do a reread of the first book beforehand given the fact that you only start getting an idea of what the hell is going on when you're most of the way through.
And the above doesn't even touch on the issues of authors bringing out revised editions of their books (waves to Stephen King) or the fact that I'm actually considering reading A Song of Ice and Fire despite the series conclusion being perhaps a decade away...
I'm pretty sure it comes down to the fact that if I really enjoy it, I'm looking to see where it goes and to get some closure. I don't have any problem with picking up books from an open ended series (Gor, Discworld) and reading them out of order. I may have a slight preference for reading earlier works in the series before later ones, but not a strong compulsion.
At the moment I've got two unfinished series on the go, both unintentional. Wait, that could be three if you count Rawn's The Ruins of Ambrai. But given it's been 15 years since the second book I think we can safely discount that one.
The first unfinished series I have going is the Penrose series by Tony Ballantyne. I picked up the first book when it hit mass market paperback, on a pre-order discount and on the strength of a neat blurb, and put it on the short list without realising it was part of a trilogy. After several consecutive less than stellar books I needed something punchy, exciting and - above all - highly readable, so I grabbed Twisted Metal off the pile. The book delivered the much needed invigoration to my reading, but towards the end I started to realise that the story wasn't really wrapping up. Admittedly, Ballantyne could have been taking a leaf out of Peter F Hamilton's book and leaving the wrap-up for the final half a percent, but no, he's left it hanging for book two.
Penrose book two is already out (2010) and I've got a copy sitting in the short-short list, but I'm not seeing even a pre-order or title on the third book and the author described it late last year as "yet uncompleted". Here's hoping it doesn't become a Captal's Tower.
The second series I started accidentally was Hannu Rajaniemi's The Quantum Thief. The book was getting some great press after its release and being billed as Hard SF. The novel itself is fantastic and flooded with killer concepts, but again, upon reaching the end I found myself stung by the lack of closure. Not too badly, I have to admit, as a lot did wrap up, but it certainly had me scouring the net for more information on the trilogy. I wasn't as lucky here as I was with Ballantyne, as the second book is only due for release in the third quarter of this year.
The question now is what to do about these series?
Well, specifically what to do about book two of Penrose. I've got it lying around, and it's been maybe a year since I read the first book. I thought I might read it about halfway between the first and the third book, but that's a little difficult to place, temporally speaking, when the third book hasn't been released. I'm currently thinking of reading it soonish, before the first one has faded, on the expectation that the third comes out sometime next year so the second won't have faded by then either.
The Fractal Prince, sequel to The Quantum Thief, I'll probably end up buying on release (the paperback octavo and HB editions will be published at about the same time) and do a reread of the first book beforehand, given that you only start getting an idea of what the hell is going on when you're most of the way through.
And the above doesn't even touch on the issues of authors bringing out revised editions of their books (waves to Stephen King) or the fact that I'm actually considering reading A Song of Ice and Fire despite the series conclusion being perhaps a decade away...