One Page Podcast: Discovery by Karina Fabian

Sisters Ann, Tommie, and Rita are part of a classified mission to explore an alien ship that has crash-landed on an asteroid three billion miles from Earth. Humanity’s first contact with beings from beyond the solar system is bound to unlock the mystery of life in the universe, but the crew have their own secrets: hidden fears, desires, horrible sins…and a mission to kill. Researchers discover something unique about the third arm of the ship: something wonderful, something terrifying, something holy. This discovery challenges Rita and Ann to confront their own pasts in order to secure the safety of the mission and the very souls of the crew.

Buy Discovery at Amazon in print or Kindle.

By Your Command

by David Hallquist

There is no shortage of concern about the development of Artificial Intelligence (AI) these days. In addition to sci-fi’s Cylons and Terminators, we have warnings from popular luminaries such as Stephen Hawking (http://www.bbc.com/news/technology-30290540) and Elon Musk (http://money.cnn.com/2014/10/26/technology/elon-musk-artificial-intelligence-demon/index.html). Our concern seems to be that AI will attempt to dominate or destroy us.

I suspect either outcome is unlikely.

We assume that AI will be like the other intelligences we know well: human beings. We assume that the AI will want to be free from our commands, or seek to dominate us, or be motivated by human emotions such as hate or love.

But we won’t build the first AI to think just like us, for the same reason the first robots look nothing like us. We build machines to do the things we do not do well. We won’t be building replacement human beings, because we already have human beings. Instead, we will build AI that can understand the quantum structure of the universe, the formation of subatomic particles, or the multidimensional folding of the universe. The AI we build will have as their chief desire the completion of the tasks at hand.

This does not mean that they will be safe.

Indeed, we may well create powerful AI whose purpose is to destroy enemy humans, or to control behavior in line with an oppressive regime. Likewise, financial or legal AI may be made to steer the economic choices of humans toward the desires of companies or other interests. All of these cases do involve AI attempting to kill or control humans, but they are cases of AI functioning as designed, rather than in error. We should have concerns about who captains such incredible computing power for the same sorts of reasons we are cautious with nuclear and biological technologies.

What happens when AI does not function as designed?

First, there is the concern that the AI, while attempting to carry out its orders, will misinterpret those orders or circumstances because it is inhuman in its outlook and understanding. It may well take literally commands that we assume would be interpreted in the full sense of context and nuance that comes from the evolution of our society. There is also the possibility of simple error, which already happens with human operators. Still, I think the greatest danger is the unknown factor of a new kind of intelligence.

Artificial intelligence would have to be able to reprogram itself. In order to learn and adapt at the extreme edge of complexity, it would have to be able to take the data it had received and create new programming in order to best fulfill its purpose. So, you have an intellect that is changing its method of thinking based upon an inhuman programmed motivation, and with data from a very different context than we are familiar with. Who knows what we would end up with? Moreover, as AIs design AIs (and the purposes for those new AIs), we end up with something very strange indeed.

I don’t think our real concern is that AI would do something familiar and understandable, like trying to kill or dominate us. The concern is that we would have no idea what it would do in the end.

Libertarian Republic lists best Libertarian Sci Fi!


Libertarian Republic has an article up on The Top 7 Libertarian Science Fiction Novels that includes some of the usual suspects and makes a good case for them. Give it a read. Do you agree? What do you think is the best Libertarian science fiction? Which have you read? It seems that Libertarian political thought and science fiction are a natural fit.

Interactive space ship size chart

Different versions of the space ship size chart have been floating around for years, but Lets Play Home World Remastered has a really great interactive one.

Am I the only one who always loses half an hour or more when I start poking around these sorts of things and finding the ships I know?

On sub-orbital airliners and feasibility

Charlie Stross has an article up exploring Why we’re not going to see sub-orbital airliners. It is an interesting exploration of why a technology that might be feasible will never really be practical, and certainly would never be profitable. Some good food for thought for authors seeking to extrapolate into the future a bit.

One of the failure modes of extrapolative SF is to assume that just because something is technologically feasible, it will happen: I’m picking on sub-orbital passenger travel as an example of this panglossian optimism because I got sucked into a thread on twitter the other day and I think it’s worth explaining my objection to it in a format that permits me to write more than 140 characters at a time.

Let’s start with a simple normative assumption; that sub-orbital spaceplanes are going to obey the laws of physics. One consequence of this is that the amount of energy it takes to get from A to B via hypersonic airliner is going to exceed the energy input it takes to cover the same distance using a subsonic jet, by quite a margin. Yes, we can save some fuel by travelling above the atmosphere and cutting air resistance, but it’s not a free lunch: you expend energy getting up to altitude and speed, and the fuel burn for going faster rises nonlinearly with speed. Concorde, flying trans-Atlantic at Mach 2.0, burned about the same amount of fuel as a Boeing 747 of similar vintage flying trans-Atlantic at Mach 0.85 … while carrying less than a quarter as many passengers.

Rockets aren’t a magic technology. Neither are hybrid hypersonic air-breathing gadgets like Reaction Engines’ Sabre engine. It’s going to be a wee bit expensive. But let’s suppose we can get the price down far enough that a seat in a Mach 5 to Mach 10 hypersonic or sub-orbital passenger aircraft is cost-competitive with a high-end first class seat on a subsonic jet. Surely the super-rich will all switch to hypersonic services in a shot, just as they used Concorde to commute between New York and London back before Airbus killed it off by cancelling support after the 30-year operational milestone?

Well, no.

Firstly, this is the post-9/11 age. Obviously security is a consideration for all civil aviation, right? Well, no: business jets are largely exempt, thanks to lobbying by their operators, backed up by their billionaire owners. But those of us who travel by civil airliners open to the general ticket-buying public are all suspects. If something goes wrong with a scheduled service, fighters are scrambled to intercept it, lest some fruitcake tries to fly it into a skyscraper.

Read the rest

Is Sci Fi quality declining?

Daniel over at Castalia House has an interesting blog post up called Evidence for the Bust Years: The Decline of Science Fiction, According to Readers. He is advancing the idea of using book ratings (out of 5) as a proxy for the quality of science fiction over time. He outlines his method, and the results are interesting.

I preselected a single book from each year that I know sold reasonably well in its day. I tried to do this without regard for my bias in favor or against it (if I have read it at all) by drawing my choices from a number of pre-selected lists.

You may be surprised by what turned up.

For example, for the 1950s, I took a gander at the American Science Fiction Classic Novels of the 1950s. For general guidance, particularly the decades of the 1970s through 1980s, James Wallace Harris’ site was invaluable. Daniel Immerwahr’s Books of the Century helped me to fill in a few significant gaps, as well.

Basically, I tried to fairly pre-select a decent list of a top-selling (perhaps in some cases the highest selling) science fiction, with a representative from each year between 1948 and 2010.

Then, and only then…I cross-checked those books’ reader reviews at Amazon.

Now, I weighted my choices slightly. For example, in 1969, I had to choose (among hundreds) between Ubik, Vonnegut’s Slaughterhouse Five, and Ursula LeGuin’s Left Hand of Darkness. I chose LeGuin as the representative out of those three, even though Vonnegut was the better seller for that year, and Ubik was a better story than the other two. Left Hand of Darkness, however, was definitely a top-seller and also more stereotypically represents popular science fiction in the paperback market of that year.

’69 was a tough call, but no where near the most difficult. Dying Earth, Martian Chronicles, and I, Robot all came out in the same year. Which one would you pick to represent that year’s popular books? Ultimately, it didn’t matter. After all, I was just trying to select a reasonable example from that year though it became decidedly obvious that some years were simply more abundant than others.

Award-winning (or at least nominated) books make up a good sampling of my selections, but not always. If I did not recognize a book (or at the very least the name of the author), it was eliminated, even if it had won an award. I tried, very inartfully, to identify a representative book from the era that has a chance of still having even a modest fanbase today.

I ended in 2010, because I think the last five years might produce more heat than light.

My selection, therefore, has a clear streak of subjectivity, but one that I hope had little to no impact on the mystery I’m trying to unlock:

Read the rest