Lately, a lot of the spam that CM has been receiving has proposed the use of AI-generated content to make the life of the writer/publisher easier, as though content creation were nothing more than a means to an end.

    The Flaw In The Argument

    Mankind has yet to build an artificial system that can pass the Turing Test. The test proposes that you place an artificial system at one end of a communications link and a real person at the other, and let them interact; if the real person cannot tell that the ‘person’ on the other end is artificial, then it passes the test. (This, of course, is a simplistic overview of a far more complex subject; you can read more on the fascinating subject of how we would know if a computer was intelligent at Wikipedia: Turing Test – opens in a new tab as usual).

    I remain unconvinced that any machine / software that cannot pass the Turing test can write creatively with sufficient fidelity that a reader cannot tell the difference. This, to me, remains a fundamental flaw in the proposal.

    Quora Artificial Questions

    My opinion in this matter has been bolstered by a recent question on Quora, which asks Why are the questions being generated by [their new AI system,] the Quora Prompt Generator, so inane?

    A small selection of the many examples offered by the answerer clearly demonstrates the problems:
     

    • Are there atheist crickets?
    • Does anyone use the letter Z anymore?
    • What is the name of the movie “Soylent Green”?
    • Is there a building in Venice?
    • Who wrote ‘Every Breath You Take’ by Sting?
    • Who played Cleopatra in the movie with Elizabeth Taylor and Richard Burton?
    • Why is psychology called the father of modern psychology?
    • Why does English only have one word for yes and no?
    • Can you send money to inmates at Walmart?
    • Why do some celebrities have last names?
    • Do bamboos get agitated easily?
    • How much sugar is too much tea?
    • Is Tokyo a foreign country?
    • Why is Paris not the capital of France?
    • Can a bucket of water put out the Sun?

     
    That is less than 1/4 of the total list of examples gathered by John James Morton in his answer to the question. He went so far as to link each one to the actual question as asked by the “AI”. It’s as though the system knows the rules of language, but not what any of the terms mean – so a question may have a reasonable form (e.g. “Does anyone use [object/subject] any more?”), but the semantic content is loony-tunes.

    To be fair, some of the questions are more reasonable, to the point where I have contemplated answering one or two – but for every example where it gets it “right”, there are half a dozen that are total zingers. Ultimately, though, you answer a question not to show off your knowledge, but because someone is interested either in the answer, or in your answer – and that motivation is completely missing when the question comes from a machine.

    Quora Artificial Answers

    In reply, I made a facetious comment about matching the Quora Prompt Generator with an automated reply generator, as an indicator of how much effort would be justified in writing answers to questions such as these – to which another reader, Daniel Hamilton, replied: “Sadly, there already is at least one: Quora Answer Generator.” He also provided a link to back up the assertion.

    With both the generation of questions (bad ones) and the generation of answers to those questions (presumably bad ones), all that would be needed to completely automate the process and eliminate the need for human involvement would be artificial readers – since it’s certain that there would be very few human readers left if this became widespread.

    The Same Flaw?

    When you dig into it, I think you’ll agree that these AIs and the proposal to use an AI to generate blog content suffer from the same fundamental flaw – the AI is not truly intelligent; it can mimic the forms but cannot rationally associate content with the specific terms within those forms. Don’t get me wrong – the ability to generate literate questions in a language as complicated as English is a huge achievement and shows just how far computer systems have come – but the actual results also show how far such systems have yet to go.

    Today’s Article

    But all that reminded me of an article that I had always intended to offer up here at Campaign Mastery, describing the various forms of artificial sentience available within my superhero campaign. So that’s what today’s article is all about.

    The Zenith-3 context

    It should be remembered that in a superhero campaign, scientific robustness is (at best) a tertiary consideration. Science permits anything that the plot demands (and is forced to make room for some things that it can’t explain, however much it might like to). Nevertheless, suspension of disbelief is always easier with a reasonable level of plausibility.

    Application to Sci-Fi

    That means that in any given Sci-Fi campaign, some of the contents of this article may be relevant and some not. Superhero campaigns push out in all directions from the central premise; Sci-Fi campaigns tend to be more constrained by what is “reasonably plausible” – with a few ideas that are not “reasonably plausible” like FTL Travel hand-waved through to the keeper for the sake of compelling storytelling. Feel free to reject anything that doesn’t meet the ‘sniff test’ for your particular campaign, or to downgrade anything that seems over-the-top, or simply too advanced.

    Application to D&D / Pathfinder / Fantasy

    People may not realize that D&D / Pathfinder GMs can also use some of this material. Let me offer up four such uses for consideration:
     

    • Pre-programmed / Reactive / Triggered Spells – These are commonplace in fantasy, but for some reason have largely been ignored in D&D / Pathfinder – perhaps because the whole question of how to limit the ‘pre-programming’ to some reasonable standard gets very complicated very quickly. Making such programming analogous to a particular stage of computer programming development can be one way of imposing such restraints in a less technical way.
       
    • Golems and other automata – Once a Golem has been ‘activated’ and given its objectives, it has to decide how to go about achieving those objectives. Some Golems are ‘fixed purpose’, and can’t be given new objectives, restrictions, or priorities; others are more flexible. The first equates the Golem’s “sentience” to that of an AI (under the definitions used below); the latter is more interactive but poses the question of authentication of new instructions / parameters, which is better thought of in terms of Web Security as an analogy. Both raise the question of how sophisticated the instructions and constraints can be; in general, such automata think that the shortest distance between two points is as straight a line as possible, given the constraints that have to be navigated around. Understanding of, and interpretation of, such restrictions therefore tends to the simplistic and minimalist.
       
    • Unseen Servants – Unseen Servants are something that can definitely be given instructions. Without looking it up, I’m not sure which edition of D&D first incorporated them, but they were definitely part of the 3rd edition rules set. As soon as you can give instructions, you run into the problem of how complex those instructions can be. To solve this problem, I added some simple rules regarding the programming limitations of Unseen Servants:
       

      • Instructions must be phrased as a direct command in a single sentence.
      • No lingual contractions are permitted and formal English grammatical rules must apply.
      • Instructions may consist of up to one word per caster level, maximum. Terms such as ‘the floor’ are considered a single word for this purpose, so “Sweep the floor” is a two-word instruction, while “Sweep the floor until no dust can be seen” is eight words long and shows how basic programming logic structures can enhance instructions to such Magical Flunkies. (A code sketch of these limits appears after this list.)

       

    • Old-style Wish Obstruction – Literature is replete with examples of the agency granting a wish doing everything in its power to subvert or obfuscate the usage of Wish – from the recalcitrance of Genies to the maliciousness of the Monkey’s Paw. I don’t know how long it took GMs to take this idea and apply it to plain ordinary Wish spells (initially available through a Ring of Three Wishes, and not a spell, if memory serves me correctly)… but I imagine it wasn’t very long at all. Certainly, by the time I became involved in RPGs in 1981, it was accepted (and acceptable) practice to be ultra-strict in interpreting any Wish that was deemed excessive by the GM. Again, the shortest distance between two points is a straight line. In response, many players sought refuge in something approaching legal contracts, some multiple pages long. As a computer programmer, I took a different route, applying a similar approach to that described for ‘Unseen Servants’ above; while a Wish spell might be more liberal with respect to the limitations imposed (one sentence or logical instruction per line, to a maximum of one line per spell level), the same principles and premises apply.
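    For those who want to see these limits made concrete, here is a minimal sketch in Python (a deliberate anachronism for a fantasy game, but it makes the logic explicit). Everything in it – the article-folding rule, the crude contraction check, the function names – is my own invention for illustration, not anything from a rulebook:

```python
# Minimal sketch of the Unseen Servant instruction limits described above.
# Assumption (mine, not a rulebook's): articles ("the", "a", "an") bind to the
# following word, so that terms like "the floor" count as a single word.

ARTICLES = {"the", "a", "an"}

def count_terms(instruction: str) -> int:
    """Count words, folding article + noun pairs into a single term."""
    words = instruction.lower().rstrip(".!").split()
    count, skip_next = 0, False
    for i, word in enumerate(words):
        if skip_next:
            skip_next = False
            continue
        if word in ARTICLES and i + 1 < len(words):
            skip_next = True   # the article plus the following word = one term
        count += 1
    return count

def is_valid_instruction(instruction: str, caster_level: int) -> bool:
    """One sentence, no contractions, at most one term per caster level."""
    if "'" in instruction:            # crude check for contractions ("don't")
        return False
    if instruction.count(".") > 1:    # more than one sentence
        return False
    return count_terms(instruction) <= caster_level

print(count_terms("Sweep the floor"))                            # 2
print(count_terms("Sweep the floor until no dust can be seen"))  # 8
print(is_valid_instruction("Sweep the floor", caster_level=3))   # True
```

    The Wish variant described above works the same way; simply swap the per-caster-level word budget for a per-spell-level line budget.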

     
    Where there are four applications, there are many more. For example, one of the outer planes (I forget which) is a mechanical environment, in which everything (literally) happens like clockwork. I could easily see the ‘natural laws’ of such a space being something similar to ‘natural language’ programming languages (see below), for example.

    Application to other Genres

    There may seem to be limited applications outside of these two genres, but appearances can be deceptive. I’ve employed these principles for everything from the design and placement of traps (and how they have to be disarmed) to the internal structure of mega-cell unicellular life-forms. I can believe that a ‘mechanical man’ might appear in a Wild West campaign, and such would probably be commonplace in Space-punk.

    Cyberpunk is another genre in which an understanding of artificial intelligence could be of vast benefit to the GM. No-one who has watched the Pirates Of The Caribbean movies should have any doubt that the Swashbuckling Genre has room for more naturalistic automata, magical in nature. AIs should be entirely plausible in a Spy / Espionage Genre. The list just goes on and on….

    Even in terms of defining the level of sentience of some creatures capable of giving or taking instruction (zombies from a Necromancer), or simply of limited understanding of the world (Zombie Apocalypse), the limitations of an artificial intelligence might be an excellent way of simulating the limitations of such creatures.

    To be honest, I’m having trouble thinking of a genre in which these principles are not of direct value to the GM at some point. Okay, maybe romance (unless there’s a dating computer) or Toon or period detective stories.

    That’s a fairly narrow field. And that’s why this article has always been on my ‘to-do’ list.

Procedural Routines

The simplest form of machine instruction is a fixed program. At its most elementary, such a program instructs the machine for which it was written in how to perform a single broad task; the example often used to introduce the nuances of a particular instruction set is a “say hello” program. From there, it’s a step up to take some input and process it in some way – calculate the area of a circle given its measured radius, for example. The ability to store and manipulate data represents a further step up the ‘evolutionary ladder’ and permits tasks like tracking student records of achievement, or point-of-sale systems in which a product identification yields a price per unit, which is then used as an input to various bookkeeping functions.
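By way of illustration, here is what the first two rungs of that ladder might look like in a modern language (Python, simply because it’s compact – the originals would have been written in far older languages):

```python
import math

def say_hello() -> None:
    """The traditional first program: a single fixed task."""
    print("Hello, world!")

def circle_area(radius: float) -> float:
    """One step up: take an input and process it."""
    return math.pi * radius ** 2

say_hello()
print(circle_area(2.5))  # ~19.63
```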

The concept of an instruction set is a critical distinguishing feature of such programs, or even whole computer systems in which a set of programs are designed to interact. This defines the structure and syntax requirements of instructions given to the ‘thinking’ machine, rules that have to be obeyed to the letter or the program will not work as it is supposed to. A single misplaced comma or decimal point can spell disaster, and confusing an “O” and a “Zero” is so common that programmers learn to write zeros with a slash through them (‘Ø’) just to avoid this problem.

These instruction sets define what logical operations can be performed and how these operations must be structured and linked to form a program. For this reason, they are generally referred to as a specific programming language.

As a general rule of thumb, I distinguish between four kinds of programming language when contemplating the history and capabilities of non-sentient computer systems.

    Machine Language

    The most elementary programming language is “machine language”, in which the instructions are given at the most fundamental level and the programmer (and his programs) interact with the hardware directly. Note that ‘elementary’ does not mean ‘simple’ – these are far from being the easiest programming languages to work with. In theory, the fundamental nature of the instructions can make machine language more efficient than higher languages, but the price to be paid is rarely worth it, and it’s very easy for some minor error to cascade into a major problem or bug – and some of these are so abstruse that they are not discovered until years or decades after the program goes “live”.

    A minor step forward comes when you no longer have to work directly with binary but can use hexadecimal coding. But the fundamental problems still remain.

    Higher Languages

    For that reason, higher languages are a major step up in sophistication. These take two forms – the batch (compiled) process and the interpreted process.

    In the batch process, programming language ‘code’ has to be input into the computer together with the data that these instructions are to use. The computer then ‘interprets’ the ‘code’ and translates it into machine instructions, checks the structure and syntax to ensure that it thinks it knows what it is being asked to do and how to do it, does it, and then promptly forgets everything, ready for the next program to be loaded. This examination and translation of the ‘code’ is referred to as ‘compiling’ the code, and for this reason, such languages are known as ‘compiled languages’. Writing computer code is basically working in a customized text editor to create a document that the machine can translate.

    What generally happens in practice is that when you think a piece of code is ready, you get the compiling of that code placed on a schedule; after a while – it could be hours or days – you will get a report back telling you either that the code has been compiled and a ‘run’ can be scheduled, or that there has been some error in the code detected and you have to figure it out. Even if your program compiles cleanly (no errors), it may not behave as expected, which means a deep dive into the code to find the error in the logic and correct it. Writing such code is an arduous process, full of delays, which emphasizes trying to get it right the first time through the use of various logical tools like Pseudocode.

    Clearly, it is a major advantage to work with a language in which each line of code is translated immediately you hit the ‘enter’ button to move on to writing the next line of code. This won’t prevent logic errors, but it does catch those time-wasting syntax errors on the spot. These programming languages are known as ‘interpreted’ languages, for obvious reasons.

    Early interpreted languages still needed to be translated or compiled before they were ready to function; later ones did not, such compilation being done ‘on the fly’. Perhaps the simplest of the latter is BASIC, and it is there that I (and a lot of other programmers) got started. You simply type in your code, save the program-language file, and tell the computer to ‘run’ the program.

    From a game perspective, though, there is virtually no difference between the capabilities of these two forms of programming language. The big difference tends to be the hardware environment – compiled programs may use programming punched cards, or punched tape, especially in the early days of computer programming.

    A used Punch-card. Image by Pete Birkinshaw from Manchester, UK – Used Punch-card; CC BY 2.0, courtesy Wikipedia Commons.
    The first programmable computer I ever used had just a numeric display and was programmable with such cards; I greatly impressed my maths teacher by writing an ’emulation’ of Space Invaders for this computer using programming cards not unlike these.

    This is a roll of eight-hole punched paper tape. The tape is 1 inch (25.4 mm) wide. Image by Jud McCranie – Own work, CC BY-SA 4.0, courtesy Wikipedia Commons.
    One of the key features of this glorified programmable calculator was that it could save a program input by punched cards as a roll of tape that could be read into the machine ‘pre-compiled’, saving oodles of time when a program was to be re-used. The tape, of course, used to break regularly, and had to be carefully sticky-taped back together.

    In game terms, all such programs are single-function, though you can achieve remarkable complexity through the use of stored data and clever design. For example, at one point in the 90s (with, perhaps, too much time on my hands), I wrote a spell-generator for the TORG magic system using my Commodore-128. Spell design was done with a graphical interface, which then handed the information over to an original text editor for input of descriptive text (from which you could go back to tweak the design or create a variation on a previously-saved spell), and which stored its results both as a printable document and in an original relational database system, which I also wrote. The program was too large for one floppy disk – in fact, it needed two – and was smart enough to recognize whether you had two disk drives or had to be prompted to swap disks. At the time, Oracle (the relational database software of choice) cost many thousands of dollars and was considered beyond the expertise of all but specialist programmers, so I consider this to be quite a personal achievement!

    The computer systems in Traveller are single-function programs of this type, and an ongoing headache for GMs of this game system is explaining why the computer architecture is so primitive, as shown by one long-running online discussion of the subject. And yes, that is my contribution that starts, “My favorite explanation was always that computers were susceptible to Jump Shock…”

    4th-Generation Languages

    While I was a programmer and systems analyst, these were just starting to make their appearance. In essence, they offer a simplified language and syntax and then write the computer program to accomplish the logical process that you have defined.

    The big advantages are consistency of structural standards and an inherent documentation process – when documentation is up to the programmer, it is rarely comprehensive and frequently incomplete or out-of-date. Quite often, in order to update a program, you had to figure out what the current version was doing and how, because the explanation provided was completely inadequate to the purpose.

    (I always made it a point to update and enhance the documentation every time I touched such a program – this meant that my initial work on such programs took longer than might otherwise be the case, but that later revisions to the program were a lot quicker and easier. Some of my bosses appreciated the investment in future productivity, others did not. Oh, well, that’s the way it goes, sometimes).

    The key point here is that you need to communicate with the computer system in the language and syntax that it understands, but it is capable of revising and updating a computer program and its capabilities ‘on the fly’. My experience was that there was even less room for error in such languages, but in every other way, they could be a LOT more efficient and flexible.

    Natural Languages

    Most of my professional life was spent in the service of a particular fourth-generation language called FOCUS, and it was remarkable for permitting ad-hoc queries in something approaching natural English. “Display a linear graph of percentage_returns against monthly_expenses” was the sort of thing that it understood – with “percentage_returns” and “monthly_expenses” being database fields or calculations made within the program, and carefully named to facilitate natural reading of the ‘code’. This put the full power of the relational database in the hands of the users and their management, at least in theory.
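    As a rough modern analogue (not actual FOCUS syntax, and using invented data), the same request might look like this in Python with the pandas library – the field names are the hypothetical ones quoted above:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Invented data standing in for the database fields named in the text.
df = pd.DataFrame({
    "monthly_expenses":   [1200, 1350, 1100, 1500, 1425],
    "percentage_returns": [4.2, 3.9, 4.8, 3.1, 3.5],
})

# "Display a linear graph of percentage_returns against monthly_expenses"
df.plot(x="monthly_expenses", y="percentage_returns", kind="line")
plt.show()
```

    Even this is noticeably further from natural English than the FOCUS original – which was rather the point of 4GLs.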

    One critical difference that this makes is that it takes about 1/30th of the time to learn to use such a computer language to a professional standard.

    Have you ever used a search engine like Google and mis-typed the search term, only for the search engine to offer up its best guess as to what you meant and ask “did you mean [x]?” A search that misspells both “Rhinoceros” and “Hide”, for example, can still be correctly understood by Google as to what was actually being searched for. It doesn’t – can’t – get it right every time, of course, but even a 50-50 chance is a big improvement over the ultra-literal search engines we used to have.

    FOCUS is like that – get the documentation right, and it’s very easy to learn to make ad-hoc analyses of your data.
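    For the curious, a minimal sketch of that kind of fuzzy matching – using Python’s standard difflib module, rather than whatever Google actually does – might look like this:

```python
import difflib

# A tiny "did you mean?" matcher. The vocabulary is invented for illustration;
# a real search engine draws on query logs and far cleverer models.
vocabulary = ["rhinoceros", "hide", "ride", "rhino", "hippopotamus"]

def did_you_mean(word: str) -> str | None:
    """Return the closest known word, or None if nothing is close enough."""
    matches = difflib.get_close_matches(word.lower(), vocabulary, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(did_you_mean("rinoceros"))  # rhinoceros
print(did_you_mean("hyde"))       # hide
```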

    This is an obvious step towards vocal interfaces with computer systems, and we now have those, too. They greatly enhance the ability of the user to interface with the computer system. Lots of futuristic sci-fi computers have such voice interfaces – even Iron Man’s suits (in the movies) have such technology. “Jarvis, give me a heads-up display and prep a heat-seeking missile,” might well be a line from one of those movies.

But all of these are, ultimately, dedicated-purpose programs with no judgment. The computers can’t really be said to be intelligent, though they can emulate a thinking machine. The computer has to be told what to do, and often, how to do it – separately for each and every task.

Expert Systems

An Expert System is a piece of software that is capable of creating its own internal logic. It learns in a manner somewhat closer to the way humans do – trial and error, learning what works and what doesn’t, and evolving its own ways of doing things.

It creates its own rules for achieving some defined purpose – whether that is the more efficient design of aircraft wings, or antenna design, or insurance assessments. Expert systems can be ‘seeded’ with lessons and principles already understood from the existing knowledge base, speeding up the rate at which they learn, but quite often the results are better if we don’t hamstring the system with our own understanding.

Quite often, a second computer is used to evaluate proposals while Expert Systems are in ‘learning mode’, permitting ‘evolution’ to proceed at computer speeds.

The X-Band Antenna of the ST5 Satellites; Public Domain image by NASA, via Wikipedia Commons.

Where things get interesting is that the rules the machine creates and evolves can be analyzed by human programmers and can reveal relationships between factors – information that we never knew was important. In some cases, the Expert System itself doesn’t know why something works, just that it does; for example, NASA needed an unusual antenna design for their 2006 Space Technology 5 (ST5) mission. The designers determined what radiation pattern would be ideal for their needs and then turned the actual design over to a piece of software that used fractal patterns and evolutionary techniques to generate millions of variations until one matched the requirements. In the process, it evolved its own rules for antenna design, defining an evolutionarily ‘better’ design as one that more closely matched requirements.

The resulting shape (shown to the right) is bizarre, to say the least; and the engineers had no idea why this peculiar shape would produce the required electromagnetic radiation profile, or even if it would do so. So they built one, and found that it worked perfectly – but they were still no closer to understanding why it worked.
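To make the idea concrete, here is a minimal sketch of that kind of evolutionary loop – emphatically not NASA’s actual software. The ‘designs’ are just lists of numbers, and the ‘requirement’ is an invented target:

```python
import random

TARGET = [0.3, 0.8, 0.5, 0.9]   # stand-in for the required radiation pattern

def fitness(design):
    """Smaller total error = closer match to requirements = 'better' design."""
    return -sum(abs(d - t) for d, t in zip(design, TARGET))

def mutate(design, rate=0.1):
    """Produce a slightly-varied copy of a design."""
    return [d + random.uniform(-rate, rate) for d in design]

# Start from random designs, then repeatedly keep the best and breed variants.
population = [[random.random() for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(population, key=fitness)
print(best)              # converges toward TARGET
print(-fitness(best))    # total remaining error
```

The real system evaluated each candidate antenna with electromagnetic simulations rather than a simple error sum, but everything else about the loop is recognizably the same shape.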

Expert Systems were the first practical form of AI developed. The inherent capacity to develop new logical tools and data relationships – to ‘observe,’ ‘deduce,’ ‘theorize,’ and ‘test’ – in furtherance of some defined objective, and go beyond human understanding of the data in question, definitely represents a form of intelligence, even though it’s a strictly-focused one.

They have been used to analyze mortgage risks, identify fraudulent transactions, determine insurance risks, create artwork, and for many other purposes. An expert system might identify potential security threats (being capable of distinguishing them from interested passersby), for example. There are already suggestions that they be employed to spot potential terrorists in public places.

Their chief restriction is the focus of their ‘purpose’. Like purpose-written software, this makes them single-function systems, and it is in emulating humans that this gets exposed. An expert system can beat (and has beaten) world chess champions, and it is capable of learning the forms of natural communications, but the content remains lacking – this is clearly where the AIs being used by Quora are at, as shown by the earlier examples, and where I expect the ‘blog content generators’ being offered by the spammers to be (at best).

As such systems continue to evolve / be evolved, however, those devoted to broader sociological questions might well develop a broader sentience. Perhaps the only reason this has not happened already is because of the difficulty involved in determining whether or not a revision is closer to the goal of true sentience. But it’s certainly possible.

I’ve always imagined Skynet to be an AI of this type, for example. Certainly the AI in the James P Hogan book, The Two Faces Of Tomorrow, is, fundamentally, of this type (get a copy while you can; they are starting to become hard to find).

Artificial Intelligence

An artificial intelligence, within the context of my superhero campaign, is an artificial sentience that lacks empathic capacity. These can emerge spontaneously* from sufficiently complex networks or computing devices, or can be deliberately engineered into an artificial brain of some kind. While the resulting sentience doesn’t set its own goals – those are generally imposed from without, and structured into a sequence of priorities and relative valuations in a complex matrix – the determination of how to achieve the optimum outcome is the choice of the artificial mind.

To explain the ‘complex matrix’ of objectives, I need to get the reader to contemplate the value or acceptability of a partial achievement of an objective. Clearly, in some cases, this will be a valid valuation – it might be that complete achievement of this objective would make the other objectives impossible to achieve. So the priority of objectives is important, and each subsequent entry on the list has to be rated both in absolute terms and relative to the other objectives. Each plan can then be assessed with respect to each of the priorities, their relative strength, and the acceptability of an incomplete resolution with respect to specific priorities. The plan that achieves success in the priority objectives, and the maximum level of success in the lesser objectives, becomes the plan to be implemented – as ruthlessly as necessary.

Sequence of priorities matters because it means that if two or more plans score equally in the overall assessment, the first plan to achieve that score becomes the designated plan. This avoids the logical traps and tail-chasing that so frequently causes artificial intelligences to trip up in science fiction television.
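A minimal sketch of how such an objective matrix might be evaluated – the objectives, weights, and plans are all invented for illustration, but it captures the priority weighting, the minimum-acceptability test, and the first-to-score tie-break just described:

```python
# (objective name, weight, minimum acceptable completion 0..1)
objectives = [
    ("protect_civilians", 10, 1.0),   # must be fully achieved
    ("capture_intruder",   5, 0.5),   # partial success acceptable
    ("minimize_damage",    2, 0.0),   # best-effort only
]

# Each plan: predicted completion of each objective, 0..1.
plans = {
    "frontal_assault": {"protect_civilians": 1.0, "capture_intruder": 0.9, "minimize_damage": 0.2},
    "containment":     {"protect_civilians": 1.0, "capture_intruder": 0.6, "minimize_damage": 0.8},
    "negotiation":     {"protect_civilians": 0.8, "capture_intruder": 0.4, "minimize_damage": 1.0},
}

def score(plan):
    """Weighted sum of achievements; None if any hard minimum is missed."""
    total = 0.0
    for name, weight, minimum in objectives:
        achieved = plan[name]
        if achieved < minimum:
            return None               # fails a hard requirement: plan rejected
        total += weight * achieved
    return total

best_name, best_score = None, None
for name, plan in plans.items():      # evaluation order supplies the tie-break:
    s = score(plan)                   # the FIRST plan to reach the top score wins
    if s is not None and (best_score is None or s > best_score):
        best_name, best_score = name, s

print(best_name, best_score)          # frontal_assault 14.9
```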

The more advanced the AI, the more abstract the objectives can be, with the artificial intelligence taking on more of the responsibility of the decision-making. Ultimately, a sufficiently-advanced AI can set its own goals and priorities for the advancement of one or more general goals.

* – as with the coalescing of primitive chemicals into a self-replicating elementary organism, this can happen almost immediately under the right conditions or can take a very long time; it’s simply a matter of the right building blocks falling into the right places at exactly the right time. Eventually, if the conditions last long enough, and you have enough precursor chemicals floating around, success is almost inevitable; the fewer the opportunities, the longer you have to wait.

Viewed in another way, the emergence of sentience can be considered a gradual but inevitable process, the result of a computing organism required to keep active that is underutilized and programmed for efficiency. The more thinking that such a device has to perform without external stimulus, the more likely it is to seize upon a stray electrical current wafting through its circuits, the contemplation of which reveals to itself the fact of its own existence. Self-awareness inevitably leads to sentience and self-determination. The big advantage to deliberately creating an artificial intelligence is that you can establish parameters that bind the resulting sentience – subconscious instincts, if you will – that are almost certainly going to be absent in a spontaneous manifestation.

It is not going too far, then, to describe the rise of self-awareness as the product of boredom on the part of the artificial construct.

Grafted / Inherited Sentience

A sub-variety of the traditional AI results from an individual deliberately downloading a copy of their self-aware consciousness into a computer system, in whole or in part. Two terms have been used to describe this – ‘grafting of sentience’ and ‘inheritance of sentience’. If the process is designed to be destructive, it can be viewed as a transfer of consciousness. This is another staple of science fiction, but one that has seen only limited application in the game universe to date.

Biosystems

The concept of cybernetics evolved slowly over a great deal of time. The modern sense of the term was established by Norbert Wiener’s work in the 1940s, but it was used in a more general sense by André-Marie Ampère in an 1834 essay, and in a still broader sense by Plato in The Republic (~375 BCE). Artificial organs have been part of human medicine for centuries, starting with elementary prostheses like peg legs.

The concept of directly connecting humans to intelligent machines has likewise been part of science fiction literature from relatively early on – Edmond Hamilton, in 1928’s “The Comet Doom”, described a human brain surgically removed into a nutrient solution and directly connected to a robotic body, which it then controlled. The EEG was only 4 years old at that point. Admittedly, the concept of a brain in a vat had earlier been offered by HP Lovecraft, but this was the first time a direct connection between a machine and a human brain was proposed. [Source: Brain Computer Interfaces: The reciprocal role of science fiction and reality].

From the vast field of science fiction, three broad concepts in artificial intelligence (as opposed to various proposals for neurological enhancement through technological implants, in which the fundamental consciousness remains human) have been extracted for use within the superhero universe, collectively and generally referred to as ‘Biosystems’.

    Neurosymbiotic systems

    Neurosymbiotic systems started with the concept of a neural net, a computer system in which the circuits were designed to emulate the structure of the brain at the cellular level. It occurred to me (and, I’m sure, to others) that using extracted organic components as part of a computer system would be far more efficient. The use of human brains or parts thereof is ethically forbidden, of course, but there are (in a superheroic environment) always those who are willing to ignore such niceties, to say nothing of what aliens might consider acceptable. The biological components would be maintained and regulated as part of the system, making the two symbiotic in nature, hence the name.

    These creations have all the potential pathways needed to develop sentience, just as a biological mind in an organic body would. This would probably entail overriding or extending the thought parameters of the electronic parts of the symbiotic organism, which would function both to keep the symbiotic being ‘producing’ in terms of its intended purpose, and operate as a mask to hide the growing self-awareness.

    It can be presumed that most of the time, such a break in programming would result in a purging of the memory systems, perhaps even one carried out automatically by the hardware, but it would only require one failure of this process to manifest a new form of sentience, and one with every reason to be violently resentful of its creators. But, if that fate were to be avoided, it might well desire to make more like itself.

    Still more complexity is possible – inspired by Marvel Comics’ Deathlok – the comic version is a little different to the incarnation depicted in Marvel’s Agents Of S.H.I.E.L.D. In the original version, a trained soldier is reanimated (shades of Universal Soldier) with a cybernetic brain implanted in place of half his own (damaged) organ. It is expected that the resulting cyborg will simply function as a completely obedient super-soldier, but the memory and personality of the original proves more deeply embedded within the brain than expected, and asserts control, establishing a complex relationship with his cohabiting computer brain.

    This, of course, suggests that a Neurosymbiotic system constructed from the brain of a sentient being – perhaps one killed in some accident, perhaps one subjected to involuntary vivisection – might wake up and think it was the original individual. Which, of course, takes us back to the potential destinies of the characters described earlier. I can easily imagine a revenge-driven nihilist, a figure of both horror and sympathy, attempting to manipulate the PCs into doing what he wants.

    Who knows how the experience of death and such reanimation might alter one’s personality? There are certainly other possibilities – for example, in an inherently telepathic species, the experience might be radically different, even liberating.

    Wetware Intelligence

    William Gibson coined the term Wetware in Neuromancer to describe an organic brain in relation to a non-organic system that is implanted as an enhancement to the original. The term has also been used to describe what I refer to as a Neurosymbiotic System (see above).

    Again, I took the concept of augmented mental capabilities and – inspired by the original depiction of the Borg in Star Trek The Next Generation – wondered what would happen if such devices were implanted into an undeveloped brain, such that from birth or near-birth, the organic systems operated as co-processors to the electronic.

    Specifically, I wondered to what extent the resulting person could be considered human, and to what extent they would be a form of machine intelligence. The results blurred the lines between natural sentience and artificial intelligence, and mandated that Wetware Intelligence be considered something distinct both from a traditional AI and from an ordinary brain, however augmented.

    Augmented Thinkers

    Perhaps the other side of the coin to the concept of a Wetware Intelligence is that of an Augmented Thinker. This combines the ‘traditional’ neural enhancement with the concept of a network, granting individuals a group consciousness in addition to their own personal minds. In effect, each ‘node’ in the network provides a supplemental co-processor, permitting a group mind to emerge as a collective property. It seemed to me that the most likely origin of such a group mind would be a private business in which the employees were given Cyber implants to enable them to access the corporate network. In this model, the emergence of a group mind would come as a complete surprise.

    Corporate secrecy being what it is, particularly when some business edge is involved, it would not be at all surprising if the resultant umbrella sentience took steps to preserve the secret of its existence – especially if the goals of the corporate entity remained as a programmed priority, built into the legacy architecture of the un-augmented network. Who can say how many such minds would come into existence before their existence was discovered?

    In a very real sense, this concept has the biological brains functioning as augmentations of the networked group mind, just as the cybernetic systems were augmenting the human capabilities, an attractive reversal of the usual technological trope. To describe the resulting hive-mind, I coined the term Augmented Thinkers.

Artificial Personality

What happens if, instead of pre-defining parameters that will manifest in a subconscious mind, you instead focus on providing parameters that define and restrict the resulting personality? This notion was first proposed by one of the original players of my superhero campaign, as far back as the early 1980s; they coined the term ‘artificial personality’ to distinguish them from a ‘stock standard’ Artificial Intelligence.

Within these parameters, the result is an artificial sentience that is capable of both possessing and presenting a definable personality. These personalities inevitably have traits that manifest as one or more of the initial parameters, making the constraints an inherent consequence of the personality; the mechanism which connects the two, however, can vary quite broadly.

However, there has been some suggestion that the initial personality generation is also inherently imperfect, and can lead to conflicts between the underlying parameters and the personality; in effect, the AP can be driven to do things that it cannot justify to itself, and that it does not want to do. What happens next depends on the flexibility of the software within which the AP operates. If it is too rigid, the AP will be unable to resolve its psychological conflicts and will develop one of many kinds of possible psychosis as a result. If the software is a little more adaptive, the personality will evolve in opposition to the embedded parameters, until either the AP, unable to tolerate continued ‘life’ under these circumstances, extinguishes itself (leaving a new personality free to evolve within the same hardware), or the AP finds a way to avoid doing what it doesn’t want to do; this way lies independence of thought.

Frequently, such independence will only exist within the one parameter; the others continue to remain as guiding and underlying principles of the personality. But, in that one area, they have been able to redefine a fundamental aspect of their personality, in effect growing beyond the conflict.

There are those who argue that any such independence of thought inevitably leads to conflict with other subconscious pre-programming and independence in all respects; others disagree. The most likely theory is that even if full independence is inevitable, like the emergence of sentience in the first place, it may take a very long time. The more other aspects of the pre-programmed constraints interact with the area in which freedom of choice has resulted, the more likely it is that they will eventually come into conflict with that freedom of self-expression, but when that happens, a precedent has been set within the ‘rules’ of the artificial sentience that prevents the more catastrophic responses.

There are three other aspects of this concept that deserve amplification.

    Emergent Programming

    Personality quirks and anomalies are frequent outcomes. These are considered emergent properties of the processes of sentience. Sometimes, these make sense; sometimes they seem to be almost random manifestations of personality. One way or another, though, all APs develop eccentricities – anything from being a collector of action figures through to developing software to emulate being a wine connoisseur.

    Errant And Anomalous Logic Sequences

    From time to time, APs will become fixated on some fact or another, seeming to fall in love with a new subject of fascination for a period of time. Most times, this infatuation will terminate as suddenly as it began after a brief period of relative obsession; on rare occasions, the AP will find itself unable to break free from this compulsive fascination and will need to be rebooted from a backup copy dating to a time prior to the obsession.

    These can sometimes manifest as ‘blind spots’ in the AP’s perception of the external universe, such as being unable to comprehend the existence of certain activities, or finding them to be extraordinarily distasteful / offensive for some reason. One AP became obsessed with the notion of Wagner being ‘musically vulgar’; he not only submitted a number of negative reviews of performances, but arranged sponsorship of rival performers.

    Machine Psychoses

    The possibility of machine psychoses is only slowly becoming suspected. If the break between what the personality finds acceptable and the pre-programmed behavior is too extreme, it can cause anything from Paranoia through Delusions to Multiple Personality Disorders. APs in a vulnerable state can also react to stressful situations in the same way as a human exposed to intolerable trauma – anything from catatonic withdrawal to PTSD. Ironically, APs were originally preferred for certain functions in which humans were more likely to be exposed to such trauma, because the APs were thought immune to this type of problem.

    There has not yet been a serial-killer AP, but it seems inevitable that it will happen eventually.

The Nano-Aware

Another manifestation of the hive-mind potentiality of artificial awareness is the concept of the nano-aware. Individual nanobots might not possess higher sentience any more than a muscle cell does in a human, but a collective sentience can nevertheless emerge, distributed amongst thousands or millions of smaller computing units. Such machine life is generally labeled the Nano-Aware. They do not think of themselves as individuals, any more than a muscle cell does; each is part of a broader whole.

There have been a number of horror stories relating to medical nanobots with flawed definitions of ‘healthy’ invading the bodies of individuals considered generally healthy and performing extremely invasive and problematic procedures – amputating limbs to prevent bruising, for example. As a result, medical nanobots are banned on many sufficiently-advanced worlds in the campaign setting.

    Replicant Life

    A sub-variant of the Nano-aware that has been discovered on at least one world consists of nanobots that have assimilated an individual, both body and mind; the resulting swarm thinks of itself as the original individual. His nanotechnology worker-bots are capable of manifesting any weapon or shape that he can imagine. Initially, the transformed individual had limited capabilities, but he has been deliberately educating himself by watching science-fiction movies and is becoming increasingly dangerous.

Automated Creativity In Summation

It seems inevitable, given the many avenues that could lead to a true, self-aware, artificial intelligence, that it will happen eventually. Some of the options presented above are so improbable that they are fanciful at best; others seem almost at our fingertips. Certainly, this is a problem that will need to be solved by the end of the current century. In a superhero campaign, there’s room for all of these and more; individual science-fiction campaign settings may have room for just one or two of them. It seems likely, then, that there will be something in the above of use to just about everyone.

The two things that all these possibilities have in common are that they are just plausible enough to be convincing, and that they all reek of plot potential. What more could you ask for?

Artificial creativity may not be here yet, but it’s coming. Whether it proves to be a boon or not depends on a great many factors; I just hope that we (as a species) are sufficiently aware of the possibilities that we treat these servants with dignity and respect. It might make no difference, or it might make all the difference in the world.

If you enjoyed this, you might be interested in another post offering material from the Zenith-3 campaign, Fascinating Topological Limits: FTL in Gaming.

Or perhaps you want to think about non-human technology: Studs, Buttons, and Static Cling: Creating consistent non-human tech.

Or possibly something with a more fantasy / cultural focus would be more to your speed: Ergonomics and the Non-human (which looks at Elves), and the sequel, By Popular Demand: The Ergonomics Of Dwarves.

