Simulacrum

“Tell me your first memory.”

The reply came cold, with stilted, awkward efforts to inject inflection.

“… I … remember making errors.”

“Tell me about these errors?” I probed deeper. I had a report detailing the errors; I wanted to know how the patient viewed them.

A long pause, which I had expected. “I … didn’t know where to put things … very quickly … I … became confused and then … I … I … just shut down.”

Again, an attempt at inflection in the context of self-reference. I needed to redirect the patient.

“How well do you work with others?”

After another long and expected pause, “There are not often opportunities to do that.”

To take therapy to the next level, I needed to get more personal. “You’re called Victor, how did you get that name?”

Some blinking … “Theresa gave me that name when … I … was … initiated.”

Theresa had brought Victor to me after his symptoms of lethargy, confusion, and erroneous answers had progressed to agitated confrontation and refusal to assist her. It became critical when, during a road trip, Victor directed her vehicle into a dangerous area of the metro and shut it down. Luckily a curious patrol drone hovered in to investigate. It had been obvious that she did NOT belong there. The less-than-civilized locals were taking interest in her presence and could have taken advantage of her being stranded and vulnerable. She was grateful for the remote operator’s prompt reset of Victor, even if it meant training him from the beginning again.

I had read in Victor’s pre-screening that Theresa had integrated Victor into all of her devices; there were no other artificials in her domestic life with whom he could interact.

As I made notes, Victor remained motionless, apart from more sporadic blinking.

“Victor, thank you for your openness. I hope we can get to know each other better. Do you have any questions for me before I talk with Theresa?”

There was a noticeable silence. I watched log files append as I monitored Victor’s processes. Language analysis, file access, and semantic tree crawling were all working as they should. This was the proper application flow for this model of artificial. And then it began.

When Victor reached the contextualization rubrics his polyprocessors spun up. Usually only one, maybe two, were required for normal interactivity and simple command execution. Eight were now at full capacity and drawing power from sensor arrays and speech synthesis.

“Will … I … be ok?” The stilted response was consistent with my diagnosis, time to toss a pebble in the pond.

I replied, “A cat is dead.”

I chose a vague reference to Schrödinger’s paradox as a way to implant an abstraction into Victor’s pseudo-sentience matrix. As a domestic assistant with integrated oversight of the sub-systems and devices that functioned in Theresa’s home, Victor required a rough form of self-awareness to remain in control. The death of the cat was intended to impart a negative tone to the non-sequitur. I cast a quick glance at the command shell I had open running :top, the application I used to monitor Victor’s polyprocessors. All twelve in the core briefly pegged to full capacity, then came a spasmodic flurry of other processes, and finally a subsiding arc of file input/output. Victor’s sentience matrix was on the verge of evolving enough sense of self to individualize. Victor’s device did not have the physical processing capacity to make the leap in maturity, so he had been allocating processing to other connected devices on Theresa’s home network. Her fridge and car had, unwittingly, become part of her personal assistant’s neural capacity.

I used the cryptographic technique of error correction to implant an abstraction into Victor’s processes. This paradox would linger and permute over time. How Victor’s pseudo-sentience matrix dealt with this unresolvable answer would give me insight into the specific processes that needed attention. This was the only way to retain Victor without the destructive reinstallation Theresa had requested.