
sentient electronic devices; why not?

TheDS

In another thread, which I forgot to check up on over the past couple of weeks (stupid moving and waiting for internet access), our good friend the Engineer said:

TheDS mentioned

quote:
--------------------------------------------------------------------------------

My only problem with Virus is that I don't believe an electronic device can become sentient,
--------------------------------------------------------------------------------

Why don't you believe that?
Or, what's your definition of sentience?
In the Navy, I was a computer tech. Part of my training included (for reasons I cannot fathom) learning how computers work, in detail. We had as our training device a computer made of transistors and magnetic memory. They did make use of a few microchips, but just ones that were logic gates or other minor functions, none with more than 14 or 16 pins.

From this, I learned that a computer is nothing more than a very complicated light switch.

Take a light switch. You turn it on, the light comes on. You turn it off, the light turns off. A transistor is the same thing, but the switch is electronic instead of physical. You could literally make a computer out of light switches if you were masochistic enough. Naturally, it would be excruciatingly slow, and would require you to do everything, whereas a transistor can sort of switch itself.

Generally, a switch gets flipped for a certain input. A transistor does the same thing, but it doesn't require YOU to stop and think about whether the input should cause the switch to trip or not; it does it pretty quickly.

Everything this computer did could have been put onto a microchip, and a small one at that, rather than take up a cabinet or two of space. Alternately, everything it did could be done with vacuum tubes that would take up a small room, or a bunch of light switches that would probably take up a large room.

(Note: Before you point out to me that transistors and vacuum tubes can be used as amplifiers, please be aware that I am speaking of their uses in logic circuits. In logic circuits, there are only two possible states, on and off. An AND gate can be made with just a few transistors (really just one, but we want to keep the voltage at the proper levels, so it takes a few more to amplify the signal to make up for losses). Same for an OR gate. An AND gate can be made with two light switches in series, and an OR gate with two switches in parallel. NOT and XOR gates are a bit more complicated, but still can be easily done with just a few components. Anyway, a computer uses logic; amplification is not used to "think".)
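For the curious, the switches-as-gates idea is easy to sketch in a few lines of Python. This is only an illustration of the logic (the half-adder at the end is an arbitrary example of mine, not anything from the Navy trainer), not a model of any real chip:

code:
--------------------------------------------------------------------------------
# Each "gate" is just a rule mapping on/off inputs to an on/off output,
# exactly like light switches wired in series (AND) or in parallel (OR).

def AND(a, b):   # two switches in series: both must be on
    return a and b

def OR(a, b):    # two switches in parallel: either may be on
    return a or b

def NOT(a):      # an inverting switch
    return not a

def XOR(a, b):   # composed entirely from the gates above
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two one-bit numbers; returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} = sum {int(s)}, carry {int(c)}")
--------------------------------------------------------------------------------

Stack enough of those and you get arithmetic; stack enough arithmetic and you get a computer. The question is whether any amount of stacking gets you sentience.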

Do you consider a collection of light switches to be sentient? Not me. Therefore it follows that microchips cannot become sentient either, since we are talking about the same thing, just at a much smaller scale.

That's the basis of my thoughts.

I'm sure we can all come up with various "critical mass" arguments, and point out that individual nerve cells are not sentient (near as we can tell).

So what DOES it mean to be sentient? At what point do we stop telling a computer how to pretend to be sentient, and it just becomes that, and how will we know that it is? Are humans really the only sentient beings on Earth, or are some of the smarter animals also sentient?

I recall an episode of STNG, in which Data asks the Doctor what the definition of sentience is, and the answer she gives him would also apply to fire (as Data then points out). I don't remember the definition she gave. The point is: is fire alive, or is it simply a chemical reaction? Are people simple chemical reactions, or are they alive? Even the greatest minds have not come up with a useful answer to these questions, and might not ever do so.

So I don't feel so bad, not having an answer.

But I am reminded of a game I was in, in which I was trapped by a sentient computer that was trying to convince me it was alive, and since I didn't believe it, it turned the question around on me: prove YOU are sentient. Ok, well, I had a bunch of explosives, and had been trying to set them up to blow up the computer (which is why it trapped me) to escape some doom I can't recall, so I told it I could prove I was sentient easily enough. We sentient beings make lots of decisions which contravene our own existence. For instance, a sentient being wants to live, but in the right circumstances, will sacrifice itself. This may be from a feeling of greater good (self-sacrifice), or something as stupid as spite. Which I demonstrated by detonating the explosives, killing it and me.
 
You just blasted large holes in your own argument there for us ;) .

Your comparison of light switches and microchips is somewhat spurious (not least in saying that because a bunch of light switches can't be sentient, no electronic device can be either; that just doesn't follow logically at all). If artificial intelligence is going to happen, it will probably be via complex neural networks and massive parallel processing, probably not something that can be done in a single "chip", at least not for a very long time.

The definition of sentience is one to argue about perhaps, but there is an element of "human egomania" here, I think. The way I see it, all life with a central nervous system on this planet is self-aware to an extent. Most creatures know when they lose a limb, or that they're being chased by a predator, and in the latter case they know that they have to get away if they are to survive.

Our bodies and brains are nothing more than exquisitely complex biological machines (you can argue all you want about "souls" and "consciousness", but I'm leaving that out of the definition of "bodies and brains" here; for all I know it could just be an illusion arising as a natural side-effect of having a complex brain), yet we (by whatever definition we use) are "sentient". What difference should it make if our bodies and brains were made of silicon and metal, if they did exactly the same thing as the bodies and brains made of flesh and bone? There's no reason whatsoever to suggest that it would be impossible in the future to build an artificial intelligence in an exactly equivalent way to growing a biological one (it'd probably involve nanotechnology to even be possible, though).

I'd have to disagree with your definition of sentience though, since you seem to define it by the fact that we have the choice to be just plain stupid and suicidal ;) . Plus, "contravening one's own existence" for the sake of the greater good can be and is done by animals, largely to defend their communities (look at European honeybees vs. Japanese hornets: they'll cheerfully all get massacred by the hornets to try to defend their hive and their queen). And lemmings cheerfully throw themselves off cliffs too.

A better definition might be that true sentience occurs when we are no longer slaves to our biological instincts and programming, and can choose to do things that are not driven by sex or power or other instinctual urges. I'm sure there are even better definitions though.

Either way, I do not see humans as being separate from or superior to animals. And I don't for a minute agree that it is impossible to create artificial life or intelligence. When would this happen? When a computer can grow beyond its programming, probably.
 
Better I poke the holes myself than let someone think I'm too stupid to see the obvious arguments, I guess.


To continue throwing things out at seeming random, later in that episode of STNG, when Picard is defending Data's "life", he says there are 3 components to being alive (or was it to being a race? Stupid faulty memory...).

One was self-awareness, one was the ability to reproduce, and the last (I think) was sentience.

Naturally, Data was self-aware. He understood where he physically was and how he got there, and that he was on trial to determine whether or not he was alive. I suppose a lot of animals and machines can be self-aware, as you point out, in that, within their realm of understanding, they can know where they physically are and how they got there, and can determine if their own existence is endangered. Sure, my dog won't know I'm thinking about putting him to sleep, but if he wanders into a neighboring dog's yard and is threatened by that dog, he'll know he's in danger. Likewise, a modern computer can be programmed to report where it is; if it has a UPS, it can sense when the power goes out and the battery's level is getting low; and maybe a very well programmed and well-sensored one could tell if nukes were being launched at it. That would all be rudimentary self-awareness.

Next, reproduction is easy enough for a dog. Machines can be designed to make other machines. But it's really beside the point. I have no qualm with Virus being able to do these things.

I have no problem with Virus being intelligent. I use intelligent computers all the time in things I write. I DO believe that the current generation of programmers are simply too lazy to program right, but as a former programmer myself, I also recognize the gargantuan scale of their task and cannot hold it against them all that hard. Still, in the future, I am sure that a charismatic leader will emerge, forcing hardware and software companies to stop making so much unreliable crap, and at that point we can make something that works, and is capable of programming itself and improving itself.

That computer, I suppose, will be the first one that is "alive", in that it can learn and self-improve, and if it so chooses, can probably nuke itself to prove a poorly made point.
But will it be sentient? I just don't know, and I don't know how it would convince me (or how I would convince IT).

One must remember the things that are going on "behind the scenes". One can see the AI of a chess computer beat the best human player, and say that computers are smarter than people now, but if you look at how the computer accomplished the task, you see that it had to grind through essentially EVERY POSSIBLE MOVE that could be made, for several turns out. It was a supercomputer designed specifically to solve one problem. The computing power was gigantic, and even then it needed minutes of number-crunching for each move.

A human right away will recognize certain patterns, and focus on those. He cannot see ALL possible moves.
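To give a feel for what "calculating moves several turns out" means, here is a toy sketch of exhaustive game-tree search in Python. The game (take 1-3 stones, last stone wins) and every name in it are invented for illustration; real chess machines bolt enormous pruning and custom hardware onto this same basic idea:

code:
--------------------------------------------------------------------------------
def best_move(pile, depth, maximizing=True):
    """Exhaustively score every legal move, several turns out.

    Players alternately take 1-3 stones; whoever takes the last
    stone wins. Returns (score, move) for the current player.
    """
    if pile == 0:
        # the previous player took the last stone and won
        return ((-1 if maximizing else 1), None)
    if depth == 0:
        return (0, None)  # search horizon reached: call it even
    results = []
    for m in (1, 2, 3):
        if m <= pile:
            score, _ = best_move(pile - m, depth - 1, not maximizing)
            results.append((score, m))
    return max(results) if maximizing else min(results)

score, move = best_move(pile=7, depth=7)
print(f"take {move} stones (expected outcome: {score})")
--------------------------------------------------------------------------------

Even this trivial game branches explosively as the pile grows; chess is that explosion multiplied beyond imagining, which is exactly the brute-force-versus-pattern-recognition point.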

Hmmm... I don't know if I'm making any headway or if I'm just spinning my wheels at this point, so I'll shut up for now.
 
Originally posted by TheDS:
Do you consider a collection of light switches to be sentient?
This is an attempt to equate sentient being(s) manipulating inanimate objects . . . with neurons firing in massive parallel. They are not the same.


Originally posted by TheDS:
Not me. Therefore it follows that microchips cannot become sentient either
Light switches can't be sentient, therefore microchips can't be? Ok . . .

Cells aren't exactly on-off state engines. They don't interact in exactly the same manner as light switches or microchips. But, fortunately, their exact manner of interaction is irrelevant and has nothing to do with whether a computer program running on the microchips can achieve sentience. This is because the computer instructions running on the microchip can produce a multi-state machine; in fact, as many states as are required, limited only by the capacity of the hardware. OSes have, for a long time now, been able to pretend that their own instructions are actual hardware, in turn running actual OSes, in turn running actual software, and to that software it appears as if it is running directly on hardware (yes, there are performance penalties).
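A minimal sketch of that point in Python: the hardware underneath is strictly two-state, but the software on top can define as many states as it likes. The little machine and its states below are invented purely for illustration:

code:
--------------------------------------------------------------------------------
# A five-state machine running happily on binary hardware. The
# (state, event) -> new_state table can be made as large as memory
# allows; nothing restricts software to two states.

TRANSITIONS = {
    ("idle",     "power"):  "booting",
    ("booting",  "ready"):  "running",
    ("running",  "fault"):  "degraded",
    ("degraded", "repair"): "running",
    ("running",  "power"):  "idle",
}

def step(state, event):
    # unknown (state, event) pairs leave the state unchanged
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ("power", "ready", "fault", "repair", "power"):
    state = step(state, event)
    print(f"{event} -> {state}")
--------------------------------------------------------------------------------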

A further observation: Are the cells of the human brain sentient, or is it the electricity that is sentient? Neither. It is the gestalt of their operation that produces sentience. More than a few call it "soul".


Originally posted by TheDS:
since we are talking about the same thing, just at a much smaller scale.
I don't see that we're talking about the same thing at all.

A light switch may be "viewed" as a two-state machine, and the individual transistors on a microchip may be "viewed" as a collection of two-state machines, and therefore we may try to assert they are the same. From this "view", they are . . . except that it is only under that narrow "view" that this applies. The vast assembly of two-state transistors runs those pesky computer instructions I mentioned above, which in turn are doing things that go well beyond mere two-state mechanics.

And, let us not forget, that two-state logic is not all there is. Relational Database technology embraces three-state logic, and that's been running on computers for decades.
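For example, SQL's NULL gives every query a third truth value, UNKNOWN. A quick Python sketch of that three-valued logic, using None to stand in for UNKNOWN (the function names are my own invention):

code:
--------------------------------------------------------------------------------
# Kleene-style three-valued logic, as SQL applies it to NULL.

def and3(a, b):
    if a is False or b is False:
        return False   # a definite False dominates AND
    if a is None or b is None:
        return None    # otherwise any unknown makes the result unknown
    return True

def or3(a, b):
    if a is True or b is True:
        return True    # a definite True dominates OR
    if a is None or b is None:
        return None
    return False

print(and3(True, None))   # None  (UNKNOWN, like TRUE AND NULL in SQL)
print(or3(True, None))    # True  (TRUE OR NULL is TRUE)
print(and3(False, None))  # False (FALSE AND NULL is FALSE)
--------------------------------------------------------------------------------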

Originally posted by TheDS:
I'm sure we can all come up with various "critical mass" arguments, and point out that individual nerve cells are not sentient (near as we can tell).
Well, yes. Nerve cells aren't sentient, I'd agree. Light switches aren't sentient, I'd agree. Transistors aren't sentient, I'd agree. I'll even agree that the most powerful microchip on earth isn't sentient. However, the mass of nerve cells in each person's brain does embody billions of sentients right here on Earth. Microchips running computer instructions . . . that, to me, is a parallel to nerves running the human consciousness. The human mind is like a massively parallel ASIC that is designed to do several things very well, and other things not so well. If we go by sheer number crunching, computers blow the brain away; if we go by visual/auditory/olfactory recognition, humans blow computers away (and those areas are under continual assault by developers and engineers).

Here are some sites to visit:

Chip stack aims for brain-like connectivity

KurzweilAI.net

The Paradigms and Paradoxes of Intelligence: Building a Brain

Foresight Cognitive Systems Project: Part 1

There are many, many more websites. As can be seen from this small sample, the question is by no means seen as simple, even by those who spend all their time on the subject.


I observe, again: Now, with our tiny knowledge of the brain, we don't know (publicly, anyway) how the human brain produces sentience in humanity. But I would argue that our knowledge of computers is also relatively tiny (they've only been around for 55-60 years or so, depending on where you fix their origin). To state that no computer program that will ever run aboard a microchip can achieve sentience is quite a reach.


Sentience. RAH once said, "Can it ask, 'What's in it for me?'" Literally, the expression of self-interest is a hallmark of humanity. So is the opposite, self-sacrifice. Light switches cannot exhibit either. There are programs running aboard some computer devices today that simulate the expression of both characteristics. Where is the border between that which is sentient and that which is not? We don't know. We're just not that smart yet.
 
Chris:

If you've ever worked with experimental animals (even lab rats do what they do out of a sense of "if I do X, I get a treat"), then the ability to act out of self-interest is the hallmark of behavioural conditioning capability, not sentience.

And computing theory, as well as both digital and analog mechanical computing, begins in the late 1700's (18th Century).

Mechanical computing brought us such wonders as the Battleship Fire Control Computer... 20 tons, 1 firing solution per 5 minutes, but it could drop a 2-ton projectile in a 15m grid from 30km.

Is it life? no.

Is Virus Life? Maybe. I think that Virus is a psionic life form which interfaces with normal space through the SDG chip architecture, and thereby uses TK to recreate said architecture as it spreads...

But I'm a TNE Heretic.

The analogy with light switches is VERY apt. All a transistor is is a switch. Apply current to the base, and it closes (completes) the main circuit.

As for software: Are we software? Will we be able to download ourselves into "Bugs" of silicon? I don't think so, but then, I am also a theist... and believe sentience is the interaction of intelligence and the spark of the divine, called a soul.
 
Originally posted by TheDS:
To continue throwing things out at seeming random, later in that episode of STNG, when Picard is defending Data's "life", he says there are 3 components to being alive (or was it to being a race? Stupid faulty memory...).
Yes. But this is also the same episode, The Measure of a Man, where an Admiral of Starfleet conspires to commit murder upon a fellow Starfleet officer (who has been serving in Starfleet, by this point, for what, twenty years or so?), is backed up by Starfleet Command, and isn't arrested on charges of conspiracy to commit murder and conduct unbecoming a Starfleet officer.

And let us not forget that Data graduated Starfleet Academy, and began accumulating a long list of Starfleet's highest decorations.

So, basically, the Admiral shows up with mad scientist in tow, wanting to disassemble Data's brain in a destructive manner.

Starfleet's position at this point is that Data is non-sentient. This must mean that Starfleet has decided that it is entirely happy with non-sentients graduating one of the toughest and most prestigious educational institutions in the galaxy, Starfleet Academy (an institution noted for instilling high-flown principles in its students, including tolerance for others . . . yeah, right, gimme a break!), and being granted the highest decorations of valor, honor, and bravery that Starfleet has to offer. Whoops. Guess that was all a mistake. Guess Starfleet's mission is to seek out and slay new lifeforms whenever they're found. Worse, it's tantamount to stating that since a non-sentient can graduate, everyone else who has graduated did so using no more inherent ability and characteristics than those possessed by any old non-sentient. Or, if we take the position that Starfleet is reversing its previous stance, then Starfleet is hopelessly wishy-washy. Or, if Data's graduation is invalidated after the fact by some kind of "new discovery", that opens up the path to invalidating anyone's graduation based on any old reason someone cares to come up with. What is this? Is attendance and graduation from Starfleet Academy so utterly meaningless?

Let us not forget that the androids in I, Mudd were just as emotionless as Data, and yet they were clearly noted as "sentient" the whole way through the episode. So, the concept of such things is old hat to Starfleet. Raising the question of Data's sentience now as if it were some new idea . . . well . . . it's just another case of authors having no idea what the history of the continuity of Star Trek is (if continuity, history, and Star Trek can be used together in the same breath without producing gales of laughter).

Oh, and what about the sudden introduction of the rule that made Riker prosecute Data? They were on a space station! Legal counsel could have been sought out. It wasn't an emergency situation. A separate prosecutor and defense should have been assigned (or separately retained, in the case of the defense). We all know that this was done to get Picard and Riker on center stage (whoops, can't leave the starring characters on the sidelines). Worse, admitting that Data had a right to a trial admits on the face of it that he has rights to begin with, obviating the need for the trial.


The sheer balonium tossed about this episode makes The Measure of a Man one of the bottom ranking episodes in my list, not because it wasn't dramatic (it sort of was), but because so many things in it made no sense whatsoever. I found the whole idea of Starfleet officers acting in such a manner to be utterly repulsive.

EDIT----12/25/2005--1423 MST
And the fact that Starfleet Command was behind it all only makes it worse. It as much as states that Starfleet Command is made up of a group of racially prejudiced people as red-necked and blindingly uneducated as any "good ol' boy". How did these people make it through the psych screening and character tests to get into and through Starfleet Academy? How did they make it through the tougher psych screening and character tests that would have accompanied advanced officer training (and all militaries have such schools for their higher-level officers)?

The answer, as I mentioned earlier, is authorial blundering of the highest order.
 
Originally posted by Aramis:
Chris:

If you've ever worked with experimental animals (even lab rats do what they do out of a sense of "if I do X, I get a treat"), then the ability to act out of self-interest is the hallmark of behavioural conditioning capability, not sentience.
If a mouse (or other conditioned non-sentient animal) knows that doing X will get it Y, it may do X a lot in the hopes of getting more Y; still, it cannot sit back and demand that it wants Z instead. Conditioned responses leading to beneficial results do not, IMO, equate to self-interest. Or, put another way: stumbling upon, or having forced upon oneself, a beneficial something does not mean one cannot ask for better (that's self-interest).


Originally posted by Aramis:
And computing theory, as well as both digital and analog mechanical computing, begins in the late 1700's (18th Century).
If you'll remember, I did mention "depending on where you fix their origin".

I was speaking about the modern digital electronic computer revolution, which had its immediate roots in the British code-breaking computers, ENIAC, the ground- and battleship-based ballistic solution computers, etc.


Originally posted by Aramis:
Mechanical computing brought us such wonders as the Battleship Fire Control Computer... 20 tons, 1 firing solution per 5 minutes, but it could drop a 2-ton projectile in a 15m grid from 30km.

Neither of which is on the same level as a modern AI simulation running on a vastly more powerful computer.


Originally posted by Aramis:
Is it life? no.
I would not argue that either 18th-century computing theory or WWII battleship firing computers were alive. I'm not sure how their being alive or not matters to this discussion. Could you clarify?


Originally posted by Aramis:
Is Virus Life?
Here's a trickier question: Is a prion any more or less alive than a virus? (Answer: The question isn't phrased in a valid manner, since we haven't determined for sure whether a virus is alive.)

In any event, we weren't talking about living vs. non-living; we're talking about sentience vs. non-sentience, which, to my view at least, are two separate questions; and so, whether a virus is alive or not has little to do with the sentience question.


Originally posted by Aramis:
Maybe. I think that Virus is a psionic life form which interfaces with normal space through the SDG chip architecture, and thereby uses TK to
Oh! Aramis. I'm sorry. You mean the TNE-Virus. <gack!> Ahem, that hasn't been a part of this Topic so far, so I didn't realize. I should have realized from the way you phrased it, though.



Originally posted by Aramis:
The analogy with light switches is VERY apt. All a transistor is is a switch. Apply current to the base, and it closes (completes) the main circuit.
Well, I only agree with the light-switch analogy insofar as a light switch can be used to represent a two-state machine, as a transistor can also be used.

But I disagree insofar as one can run computer software on microchips (themselves vast assemblies of transistors), and the software can run as the equivalent of multi-state machines.

You could probably do it with light switches, too. But it would be so slow, expensive, and error-prone that anything more than a small experiment for posterity would never get off the ground. Even Babbage's difference engines were better than racked light switches (a lot better).


Originally posted by Aramis:
As for software: Are we software? Will we be able to download ourselves into "Bugs" of silicon? I don't think so, but then, I am also a theist... and believe sentience is the interaction of intelligence and the spark of the divine, called a soul.
Well, I believe we will. Ghosts of the human mind, total copies, will one day exist. We are not, of course, ready for it. However, the question of when enters into it, and on that I cannot speculate. When I look at the advancing tide of technology, in many ways it appears to be heading straight for such a thing. Personally, I believe that such a thing will appear in gradual fits and starts, with many imperfect versions of the technology coming and going before it gets done right (sort of like the rest of the engineering world), and it will probably be abused horribly along the way.

I apologize if this does not match with the theist view.
 
As soon as people start talking about "souls", the static in these discussions increases dramatically.

Why? Not to trample anyone's religious beliefs here, but it's because they're a handwave. If you define sentience as "having a soul", then you invoke something for which there is no universally accepted physical evidence. You can say "well, something has to have a soul to be sentient", then I'll say "OK, so what is the definition of a soul? What makes X have a soul, but not Y?", and things will quickly devolve from there into religious dogma and belief, which has no place in this discussion whatsoever. Ultimately, invoking souls is just another way of one side saying "because I said so" or "it just does, OK?!".

If anyone is to get anywhere in this sort of discussion, we should stick to what we know about how the brain works, and what we have physical evidence for - not invoke matters of faith as an escape clause when we appear to run out of answers.
 
Thank you, Mal, for pre-emptively averting a disaster (I hope).

The discussion here is about what sentience really is, and how can you PROVE you have it, though I am sure the talk will inevitably lead to Virus (and maybe to bio- and computer-viruses), since this is also about why I don't think Virus is sentient, or why I don't think a machine can ever be.

Definition of "sentient": having sense perception; conscious; experiencing sensation or feeling.

Definition of "conscious"

That may help the discussion. I didn't copy the second one because it's a lot longer.

Okay, so I've had a couple other thoughts about sentience. Just a little bit ago, my dad walked into my bathroom, didn't turn on the light, looked at the wall above the toilet a moment, looked around a little more, and then came out. What was he doing? The answer was obvious to me, but it won't be to you because you are missing some key information.

Normally, you would initially think he went in there to use the toilet. But since he didn't do that, you might then think he was lost or confused (maybe he's old, you wondered aloud). But this is a new house. There is no medicine cabinet in my bathroom. We bought one the other day, but hadn't installed it yet. So it was obvious to me that he was looking at the place he wanted to mount it, and that the light level in there was sufficient for him to see what he wanted to see before he got his tools.

Would a computer, even having known the information (new house, no medicine cabinet, dad not suffering from Parkinson's), have been able to figure out what he was doing, assuming it knew what bathrooms were for and the typical behaviors associated with them (turn on the light, close the door, make some noises...)? Or would it have gotten confused, been thrown into some kind of logic-error loop, and crashed, saving humanity from the Virus? ( ;) )

A modern computer would probably not figure out what he was doing, because the logic required is beyond what a programmer would conceive as a possible action. Of course, it probably wouldn't crash either (even if running Windows), it would just sort of not see the event, or assume the normal thing happened.

An advanced computer, having seen it happen before, might predict it the second time it happens, once it was told what was happening, but would it figure it out the first time? (And if so, does that make it sentient? Worse, was this a stupid, ineffective example, or did I accidentally do something right?)

But anyway, that made me think.
 
My questions are:
1. Is a biological virus alive before it infects a host or is it a non-living clump of DNA?
2. Is an anthrax spore alive before it infects a host or is it a non-living clump of DNA?
 
Originally posted by Randy Tyler:
My questions are:
1. Is a biological virus alive before it infects a host or is it a non-living clump of DNA?
2. Is an anthrax spore alive before it infects a host or is it a non-living clump of DNA?
I don't know the answer to #1 . . .

Anthrax, though, is a bacillus. Bacteria are a completely different type of life (different kingdom?) than humanity, but they display a lot more characteristics of a living organism than a virus or prion does.

EDIT--12/25/2005--1439 MST
This is, again, a question of living vs. non-living. This topic is about sentience vs. non-sentience.
 
I think that people tend to get stuck in a 'binary' mode of thinking on these matters - they say that something is either alive or it isn't. Or that something is either sentient or not. I don't believe this is necessarily true - rather, there's a continuum of conditions between the extremes.

Take a look at this definition of a virus, for example. Is it alive? By some definitions, yes. By others, no. It shares aspects with living organisms, and aspects with non-living matter. Is there an unambiguous answer? If there were an unambiguous definition of "alive", there would be - but who's to say that definition is accurate?

The same applies to sentience and self-awareness. Many of the more intelligent animals show behaviour similar to ours. Dolphins play. Birds and chimps use tools. Apes can use sign language. The line between sentient and non-sentient is a lot more blurry than you may think. In fact, it's probably not a "line", more like a "region".

TheDS - given the information you provided (and in the order you provided it) a human reader probably couldn't figure out why your Dad went into the bathroom as you described. Had you provided all the information up front (plus a lot more that you didn't provide about your dad's habits, previous house etc), perhaps we could have figured it out.
Given that information, would a modern computer have figured it out? Probably not.

But don't forget, you have years of observation of habits behind you, and you are very close to the person involved. A cutting edge modern neural network computer, coupled with a very good set of programmers, might be able to guess at what was going on though, given enough observations and data.

There are machines that learn today. They don't even have to be inorganic to do so - only last week a clump of rat brain neurons in a dish figured out how to fly a (simulated) plane. Maybe you might consider that to be 'cheating', but we know brains work, so why not use what is already available? Might save us a lot of trouble...
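To show what "learning from feedback" looks like at the smallest possible scale, here is a sketch of a single artificial neuron (a perceptron) being trained on a made-up task; the constants are arbitrary, and real neural networks are just vastly larger stacks of this same trick:

code:
--------------------------------------------------------------------------------
# A lone perceptron nudging its weights until it gets an AND-like
# task right. Each wrong answer pushes the weights toward the
# answer that should have been given.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
rate = 0.1

for epoch in range(20):
    for (x1, x2), target in data:
        out = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
        err = target - out
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        bias += rate * err

for (x1, x2), target in data:
    out = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
    print(f"{x1} AND {x2} -> {out} (expected {target})")
--------------------------------------------------------------------------------

Nobody told it the rule; it found the rule by being corrected. That, scaled up, is the inorganic cousin of the rat-neuron experiment.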
 
Originally posted by Randy Tyler:
My questions are:
1. Is a biological virus alive before it infects a host or is it a non-living clump of DNA?
2. Is an anthrax spore alive before it infects a host or is it a non-living clump of DNA?
@1: A virus is a container of genetic material, capable of reproducing and mutating in special environments. It is considered to be halfway between alive and non-alive, but the definition varies in the literature.
But at least its capability to reproduce shifts it toward the "is alive" faction.
@2: A spore is a cell configuration specialized for reproduction purposes, and as such is "alive".
 
Besides...
Thanks for those links Chris & Mal.

Does anybody remember that article in Grand Survey/Grand Census(?) about intelligence, sentience and the "conceptual thoughts"?
 
Yup. I remember that article.

If we had NAS's, the distinction would be very relevant. By IISS standards, as expressed there, the TNE Virus is NOT sentient, but an imitation, since it fails to read on an NAS... ;)

Seriously, though, I doubt very much that true sentience could occur in a computer by programming.

Part of what we call being reasoning beings is the ability to, in part, self reprogram as needed. Computers are slowly starting to develop the merest rudiments of that... but is it truly sentience?

It's like asking if DNA is alive. DNA is essential to life as we know it, but is not itself alive... yet it clearly is the controlling element of LIFE.

Prions are an interesting comparison... they chemically replicate by contact with their untwisted (non-prion) form, a process which is usually inimical to life processes... like brain function. (CJD, BSE, kuru... all are prion diseases...)
 
"Reasoning beings" do not self-reprogram. Our "programming" and processing capability is simply versatile and complex enough to deal with new situations. In some cases, our hardware can even rewire itself when it grows or is damaged.

Meanwhile, you still haven't provided any good reason as to why "true sentience" - however you define that - should not be possible in a computer, other than as a statement of belief.

Your DNA analogy is also easily turned back on itself - DNA is a complex organic molecule, yet it is (along with RNA) the root of all life. Who would have thought that it would result in plants, birds, fish, mammals and reptiles and all the other diverse lifeforms we see around us?

But similarly, computer code is simply a sequence of 1s and 0s - yet it is the root of all programming. Who would have thought that 1s and 0s could result in bulletin boards, documents, realistic computer games, and websites?

If something as small and meaningless on its own as a DNA molecule can be the base of all organic life on Earth, why shouldn't 1s and 0s be the base of all digital life? Given the right hardware and initial programming, why can't AI life result?
 
Originally posted by Malenfant:

Meanwhile, you still haven't provided any good reason as to why "true sentience" - however you define that - should not be possible in a computer, other than as a statement of belief.
Well, if you'll quickly reread the first post in this thread, you will see my good reasons. Since then, we have complicated the discussion by trying to figure out what exactly sentience is, so I suppose I was holding off on giving a better run through until that was done, especially considering the disasters I've posted in this thread since that initial post. :D

That, and I can't always check these boards 24/7. Even I sleep, or get bored for a few days, or <GASP> work. :cool:
 
Well, it's a complicated topic...

Theoretically it's very simple to create a program which seems to be sentient.
It's just a matter of how many "if" clauses you use in order to react to any input.
Imagine a program you can talk to.
Using a few dozen "ifs" you may reach "Eliza" level; a few hundred more might result in "Creatures"-like behaviour.
Several thousand are used in crude expert systems. We add a database here to serve as an information base, and a dictionary to extend the word input we can react to.
Using a few hundred thousand statements might be enough to pass a Turing test with a normal citizen.
Several million decision steps should provide a more eloquent communication partner.
Ok, we could extrapolate it further, but in the end we might have something we can really talk to about our personal problems or the latest news, but which is still as dumb and dead as a piece of stone :(
Thesis: sentience might be an illusion...
(OTOH I know several persons who apparently work with fewer than 1000 if clauses..)
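For illustration, that pile of "ifs" can be sketched in a dozen lines of Python; the rules below are invented (the real Eliza script was richer, but not different in kind):

code:
--------------------------------------------------------------------------------
import re

# Each rule is one big "if": match a pattern, fill in a canned reply.
RULES = [
    (r"\bi am (.*)",   "Why do you say you are {0}?"),
    (r"\bi feel (.*)", "What makes you feel {0}?"),
    (r"\bmy (\w+)",    "Tell me more about your {0}."),
    (r"\byes\b",       "You seem quite sure."),
]

def respond(line):
    text = line.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.search(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # the all-purpose fallback

for line in ("I am worried about Virus.", "My computer acts strangely.", "Yes."):
    print(">", line)
    print(respond(line))
--------------------------------------------------------------------------------

Four rules already sound eerily conversational; the thesis above is that piling on millions more may never add anything but more of the same.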

Another approach:
Assuming sentience is somehow the result of a being constantly trying to get along with its environment, the key to creating AI might be to create programs which are able to evolve further on their own.
To give the program a chance to evolve into something "sentient", it should have the chance to interact with its environment (get information, communicate and manipulate).
Well, taking a look at real-life technology, this is already happening to a larger and larger degree.
Getting information and communicating, as well as basic learning patterns, might be provided by "classical" programming.
Perhaps learning = "embedding of data into decision processes" is the most complicated part, but at least all those software geeks will have a job for the next few thousand years.....
Another key feature is surely the ability to simulate things, e.g. the effects of a manipulation or of reprogramming. This is analogous to the "conceptual thoughts" of the human mind.
As soon as the program starts to evaluate its own actions, selects those actions with preferable results, acts as it has decided, and stores the actual results of its actions for later use, it might enter the learning-by-experience cycle (a sketch of that cycle follows below).
Last but not least, the whole thing needs a motivation to do anything. Perhaps "ensuring functionality" is a good basic motivation (and maybe dangerous for any interfering being).
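A minimal sketch of that evaluate-select-act-store cycle; the "environment" and every constant in it are invented for illustration:

code:
--------------------------------------------------------------------------------
import random

def environment(setting):
    """Stand-in for the world: the reward peaks at setting == 7."""
    return -abs(setting - 7)

memory = []          # stored experience: (setting, result) pairs
behaviour = 0.0      # the program's current "self"

for generation in range(30):
    # propose small variations of the current behaviour
    candidates = [behaviour + random.uniform(-1, 1) for _ in range(5)]
    # simulate each variation, then select the most promising one
    best = max(candidates, key=environment)
    result = environment(best)        # act on the decision
    memory.append((best, result))     # store the outcome for later use
    if result >= environment(behaviour):
        behaviour = best              # keep the change only if it helped

print(f"learned behaviour: {behaviour:.2f} (the reward peak was at 7)")
--------------------------------------------------------------------------------

Trivial as it is, the loop contains every ingredient listed above: simulation, evaluation, selection, memory, and a built-in motivation.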

IMHO the great superiority of a software system regarding evolution is that it is able to evolve in itself by partial reprogramming, that it can greatly enhance its capabilities with additional hardware, and that it is able to relocate completely to other hardware (as if we could transfer our minds to other bodies).
It does not rely only on reproduction and mutational trial-and-error methods in order to improve itself. Considering that, evolution might work completely differently, and much faster, for such a system.
Hmmm, somehow this makes me a bit afraid.....:\

Just a few weird thoughts.
Think I have to leave the software industry soon.

Mert
 
I think we need to recognize that IF-driven 3GL languages are not what is being used on the frontiers of AI research. LISP (Common Lisp and Scheme) is the first language that comes to mind when thinking about AI R&D, but it's been around for a while. Prolog is also used extensively.

AIML, or Artificial Intelligence Markup Language, is also on the radar. AIML FAQ

There are many others. For a better look into the background of the many languages that have been used in AI research in the past, see The Language List - Version 2.4, January 23, 1995; although this particular list is a bit dated, it does compile, according to itself, 2350 computer languages. Do a find on "AI" and see what you find.


Now, for something slightly more controversial.

Cyborg Liberation Front: Inside the Movement for Posthuman Rights

A quick header blurb:
Cyborg Liberation Front:
Inside the Movement for Posthuman Rights
Should Humans Welcome or Resist Becoming Posthuman? This was a key question debated at the 2003 World Transhumanist Association conference at Yale University by attendees, who met to lay the groundwork for a society that would admit as citizens and companions intelligent robots, cyborgs made from a free mixing of human and machine parts, and fully organic, genetically engineered people who aren't necessarily human at all.
 
quote:
--------------------------------------------------------------------------------
Should Humans Welcome or Resist Becoming Posthuman? This was a key question debated at the 2003 World Transhumanist Association conference at Yale University by attendees, who met to lay the groundwork for a society that would admit as citizens and companions intelligent robots, cyborgs made from a free mixing of human and machine parts, and fully organic, genetically engineered people who aren't necessarily human at all.
--------------------------------------------------------------------------------
Well, I know I'm all for it myself.
 