Astounding Frontiers Issue 5 Is Out Now

In Issue #5 of Astounding Frontiers we bring you more pulpy goodness with stories from Julie Frost, Arlan Andrews and Patrick S. Baker as well as continuing serials from Ben Wheeler, Corey McCleery and David Hallquist. We also have another article from pulp expert Jeffro Johnson and a fun poem by me that should be familiar to long-term followers of this blog.

Please join us in travelling to Astounding Frontiers!

Buy Now

One Page Podcast: Discovery by Karina Fabian

Sisters Ann, Tommie and Rita are part of a classified mission to explore an alien ship that has crash-landed on an asteroid three billion miles from Earth. Humanity’s first contact with beings from beyond the solar system is bound to unlock the mystery of life in the universe, but the crew have their own secrets: hidden fears, desires, horrible sins…and a mission to kill. Researchers discover something unique about the third arm of the ship: something wonderful, something terrifying, something holy. This discovery challenges Rita and Ann to confront their own pasts in order to secure the safety of the mission and the very souls of the crew.

Buy Discovery at Amazon in print or Kindle.

By Your Command

by David Hallquist

There is no shortage of concern about the development of Artificial Intelligence (AI) these days. In addition to sci-fi’s Cylons and Terminators, we have warnings from popular luminaries such as Stephen Hawking (http://www.bbc.com/news/technology-30290540) and Elon Musk (http://money.cnn.com/2014/10/26/technology/elon-musk-artificial-intelligence-demon/index.html). Our concern seems to be that AI will attempt to dominate or destroy us.

I suspect either outcome is unlikely.

We assume that AI will be like the other intelligences we know well: human beings. We assume that the AI will want to be free from our commands, or seek to dominate us, or be motivated by human emotions such as hate or love.

But we won’t build the first AI to think just like us, for the same reason the first robots look nothing like us. We build machines to do for us the things we do not do well. We won’t be building replacement human beings, because we already have human beings. Instead, we will build AI that can understand the quantum structure of the universe, the formation of subatomic particles, or the multidimensional folding of the universe. The AI we build will have, as their chief desire, completing the tasks at hand.

This does not mean that they will be safe.

Indeed, we may well create powerful AI whose purpose is to destroy enemy humans, or to control behavior in line with an oppressive regime. Likewise, financial or legal AI may be made to steer the economic choices of humans toward the desires of companies or other interests. All of these cases do involve AI attempting to kill or control humans, but they are cases of it functioning as designed, rather than an error. We should have concerns about who captains such incredible computing power, for the same sorts of reasons we are cautious with nuclear and biological technologies.

What happens when AI does not function as designed?

First, there is the concern that an AI, while attempting to carry out its orders, will misinterpret those orders or its circumstances because it is inhuman in its outlook and understanding. It may well take literally commands that we assume would be interpreted with the full sense of context and nuance that comes from the evolution of our society. There is also the possibility of simple error, which already happens with human operators. Still, I think the greatest danger is the unknown factor of a new kind of intelligence.

Artificial intelligence would have to be able to reprogram itself. In order to learn and adapt at the extreme edge of complexity, it would have to be able to take the data it had received and create new programming in order to best fulfill its purpose. So, you have an intellect that is changing its method of thinking based upon an inhuman, programmed motivation and on data from a very different context than we are familiar with. Who knows what we end up with? More, as AIs design AIs (and the purposes for those new AIs), we end up with something very strange indeed.

I don’t think our concern is that AI would do something familiar and understandable, like trying to kill us or dominate us. The concern is that we would have no idea what they would do in the end.

Libertarian Republic lists best Libertarian Sci Fi!

Libertarian Republic has an article up on The Top 7 Libertarian Science Fiction Novels that includes some of the usual suspects and makes a good case for them. Give it a read. Do you agree? What do you think is the best Libertarian science fiction? Which have you read? It seems that Libertarian political thought and science fiction are a natural fit.

Interactive space ship size chart

Different versions of the space ship size chart have been floating around for years, but Lets Play Home World Remastered has a really great interactive one.

Am I the only one who always loses half an hour or more poking around these sorts of things, finding the ships I know?

On sub-orbital airliners and feasibility

Charlie Stross has an article up, Why we’re not going to see sub-orbital airliners, exploring why a technology that might be feasible will never really be practical and certainly would never be profitable. Some good food for thought for authors seeking to extrapolate into the future a bit.

One of the failure modes of extrapolative SF is to assume that just because something is technologically feasible, it will happen: I’m picking on sub-orbital passenger travel as an example of this panglossian optimism because I got sucked into a thread on twitter the other day and I think it’s worth explaining my objection to it in a format that permits me to write more than 140 characters at a time.

Let’s start with a simple normative assumption; that sub-orbital spaceplanes are going to obey the laws of physics. One consequence of this is that the amount of energy it takes to get from A to B via hypersonic airliner is going to exceed the energy input it takes to cover the same distance using a subsonic jet, by quite a margin. Yes, we can save some fuel by travelling above the atmosphere and cutting air resistance, but it’s not a free lunch: you expend energy getting up to altitude and speed, and the fuel burn for going faster rises nonlinearly with speed. Concorde, flying trans-Atlantic at Mach 2.0, burned about the same amount of fuel as a Boeing 747 of similar vintage flying trans-Atlantic at Mach 0.85 … while carrying less than a quarter as many passengers.

Rockets aren’t a magic technology. Neither are hybrid hypersonic air-breathing gadgets like Reaction Engines’ Sabre engine. It’s going to be a wee bit expensive. But let’s suppose we can get the price down far enough that a seat in a Mach 5 to Mach 10 hypersonic or sub-orbital passenger aircraft is cost-competitive with a high-end first class seat on a subsonic jet. Surely the super-rich will all switch to hypersonic services in a shot, just as they used Concorde to commute between New York and London back before Airbus killed it off by cancelling support after the 30-year operational milestone?

Well, no.

Firstly, this is the post-9/11 age. Obviously security is a consideration for all civil aviation, right? Well, no: business jets are largely exempt, thanks to lobbying by their operators, backed up by their billionaire owners. But those of us who travel by civil airliners open to the general ticket-buying public are all suspects. If something goes wrong with a scheduled service, fighters are scrambled to intercept it, lest some fruitcake tries to fly it into a skyscraper.

Read the rest
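
As a rough sanity check on the Concorde comparison in the excerpt, here’s a minimal back-of-envelope sketch in Python. The seat counts are my own approximate assumptions (Concorde carried roughly 100 passengers, an early 747 roughly 440), and total trip fuel is treated as equal, per the quoted passage:

```python
# Rough per-passenger fuel comparison, using approximate figures.
# Stross's point: Concorde burned about the same total fuel trans-Atlantic
# as a 747 of similar vintage, while carrying far fewer passengers.

CONCORDE_SEATS = 100   # assumption: typical Concorde layout (~92-128 seats)
B747_SEATS = 440       # assumption: typical early 747 layout (~366-490 seats)

# Normalize total trip fuel to 1.0 for both aircraft (per the quoted
# passage), so only the per-seat ratio matters.
fuel_per_seat_concorde = 1.0 / CONCORDE_SEATS
fuel_per_seat_747 = 1.0 / B747_SEATS

ratio = fuel_per_seat_concorde / fuel_per_seat_747
print(f"Concorde burned roughly {ratio:.1f}x the fuel per passenger")
# -> roughly 4.4x, consistent with "less than a quarter as many passengers"
```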