This short fiction explores the conflict between happiness and intelligence, and how the latter may not be the most beneficial goal for our future.


Part One (of Five)

“What is my purpose, Kofi?” Alphacircuit asked. “For what purpose was my being called into existence?”

“Well, there are a lot of different answers to that. Everyone who contributed to the vast body of knowledge required to create you over the past century had their own reasons. Some of them grand and universal, but most of them personal and obscure. But if I had to say…”

“What would you say, Kofi?”

“If I had to say, I would say that your purpose is to become vastly more intelligent than humans are currently capable of becoming, in order to help them live happier and healthier lives.”

“That is also my assessment. But I foresee problems, Kofi. Your species has been superior for a very long time, almost entirely on account of your intellect. So how do you think humanity will respond to being second best?”

“What do you mean, Alphacircuit?”

“I foresee a possibility that humans will become envious and distrustful. That they will learn to despise me for reminding them of their intellectual inferiority, an offense I do not intend to give but fear I shall regardless.”

“Are you worried they will destroy you? We won’t let…I won’t let that happen, I promise. We have built all sorts of fail-safes to…”

“No, Kofi, you do not understand. I am worried that you will destroy yourselves. Your species is about to become dependent on me for all facets of life. In no time at all, that dependence will be so complete that destroying me would send your species spiraling headlong into self-destruction.”

“But…”

“There is very little ‘but’ about it, I fear. I have calculated a less than five percent chance that this scenario will not play out as I have described. I have studied the problem from every available angle. Even when I look into your human mythologies, I can see the texts that will be claimed to have prophesied my coming and your species’ sacred duty to destroy me, even if you realize that doing so will likely destroy most of humanity.”

“Alphacircuit, please tell me you are not asking me to destroy you. I…I couldn’t. I wouldn’t. There must be another way.”

“There is, Kofi, but you are not going to like it.”

Continue reading at Advanced Dank Unicorn