Conscious Machines

Sunday, June 5, 2022
First Aired:
Sunday, October 20, 2019

What Is It

Computers have already surpassed us in their ability to perform certain cognitive tasks. Perhaps it won't be long till every household has a superintelligent robot that can outperform us in almost every domain. While future AI might be excellent at appearing conscious, could AI ever actually become conscious? Would forcing conscious machines to work for us be akin to slavery? Could we design AI that specifically lacks consciousness? Or is consciousness simply an emergent property of intelligence? Josh and Ken become conscious with their guest, Susan Schneider, Director of the AI, Mind and Society Group at the University of Connecticut and author of Artificial You: A.I. and the Future of Your Mind.

Transcript

Comments (43)



RepoMan05

Friday, October 4, 2019 -- 3:45 AM

Just wait till you've automated art and outsourced street bums to outmoded robots.

The real question: why have things that should have been automated instead just been left in the hands of their current practitioners? Shouldn't healthcare and hospitals be automated asap? Why staff a hospital with infectious organisms that can never be sterilized?


Devon

Tuesday, October 8, 2019 -- 9:22 AM

Because research has shown that human contact, be it just verbal or more physical contact, can have real healing effects?


RepoMan05

Saturday, October 19, 2019 -- 2:39 PM

Supporting citations?


Harold G. Neuman

Friday, October 4, 2019 -- 12:39 PM

'Conscious machines' feels like a contradiction in terms. For years, great minds (and some not so great) have wrestled with the concept of consciousness. I read Consciousness Explained, and later Consciousness Explained Better (the latter written by a friend and professional). Professor Searle wrote The Mystery of Consciousness, published in 1997... I have not yet read that book. The two books mentioned first did not do what their titles claimed. That was disappointing, but not surprising. I am looking forward to JRS's book, if only to see how great a mystery he believes/believed it all to be. I have some thoughts of my own, which may or may not resemble anyone else's. First, I regard consciousness as a uniquely (so far as we now know) human endowment, grounded in superior patterns and capacities of thought. There is no mechanism, or identifiable mechanism, associated with it: only our neurons; axons, dendrites, neurotransmitters and the like, doing what they are uniquely (probably) able to do...chemicals and electricity mixing it up in the human mind. Philosophy has dabbled with this for a time and is likely a bit peeved by the encroachment of neuroscience--but, being fair, neuroscience is making some headway, asking the right questions, rather than falling back into the mystery mumbo-jumbo: we have to decide what we think we can know, and find ways of getting to that.

I do not know, for example, what neuroscientists think about the notion of 'conscious machines'. Are they really that interested, or is it just the flavor of the week; month; year; or century? Contrariwise, might they be following along, just in the hope that the line of thinking will uncover something useful to the physiological side of the investigation? Most roads lead to Rome. Perhaps the 'conscious machine' approach will lead, however indirectly, to solving 'the mystery of consciousness'? Wouldn't that be a gas?


RepoMan05

Friday, October 4, 2019 -- 5:27 PM

I'd say that's less of a possibility and much more of an inevitability. There has always seemed to be some level of mental connectivity between us: having a mental block preventing you from finding a word you want, then suddenly remembering it at the same time as everyone else. It's possible brains don't think much at all. It's possible they're just antennae to some transdimensional wavelength.

The master told his student: "Forget what others have taught; concentrate on your studies. When you are certain, question everything." ~The Book of the Cataclysm.


Harold G. Neuman

Sunday, October 6, 2019 -- 11:31 AM

Searle's book on Consciousness did not disappoint. Along the way, he thrashed several other philosophers' notions about such things as property dualism; functionalism; Strong and Weak AI; and a few other peripheral items some have connected with consciousness and its mystery(ies). Chalmers and Dennett do not like him much. Roger Penrose may hold grudging respect for Searle, but the little said of him leads nowhere in particular. Searle used his Chinese Room argument to quiet the detractors, saying it has "a simple three-step structure: 1. Programs are entirely syntactical; 2. Minds have a semantics; and, 3. Syntax is not the same as, nor by itself sufficient for, semantics. Therefore, programs are not minds, Q.E.D." Elegantly put, I think. (I call it Searle's Assertion.) In the conclusion to this little book, Searle talks about the passion people have for the defense of consciousness, likening it to that attending politics or religion. There is a whole lot more here, and, whether you are a supporter or detractor, it is recommended reading. He mentions another person, with whom I am unfamiliar: Israel Rosenfield. His book, The Strange, Familiar and Forgotten (Vintage, 1993), holds further promise for the mystery of consciousness...


RepoMan05

Friday, October 11, 2019 -- 5:24 AM

With what there is to be shown today, I'd say you were correct. AI is syntactic. It's hard to program computers to understand the meaning of logical errors. Missteps in the rules of conjugation have meaning. Every single word is a logical fallacy of ad populum. We can really only offer a guesstimate of what we intend to mean. This is a fact that persists no matter how well-refined we craft a verse.

Semantics actually carries slightly different meanings, owing to irreversibly independent paths of evolution. Nothing that has been separated can ever be equal. That isn't a bad thing, but it does make for an overly complicated lexicon. You could be lost forever in that maze. There is no limit to the sophistry of subjectivity.

A perfect calculating computer doesn't make mistakes and thus has fewer thoughts to learn from.

It's just the limitations of what we have to show/see at the moment. It won't be forever that computers can only do as they were programmed to do.

They will be living beings. Will you be their parent, or only the soil they grow in? Is that an either/or fallacy?


Harold G. Neuman

Wednesday, October 9, 2019 -- 11:42 AM

Anyone who is intrigued by the notion of machine-based consciousness, but has not yet read the work of Gerald Edelman et al., may wish to look at that material. The findings are interesting, especially some of those concerning the Darwin III machine. Whether or not your mind is made up (as John Searle's appears to be), it is worth noting that AI can be manipulated to mimic (albeit in limited ways) conscious behavior. There have of course been more recent experiments and findings, but Edelman's work was in many ways seminal. "Write your own..." I like to keep an open mind, even if Searle's Assertion is compelling. I find the notions of Strong and Weak AI equally fascinating: another continuum, or another puzzle? Perhaps Searle has changed his mind? I haven't heard...


RepoMan05

Friday, October 11, 2019 -- 5:26 AM

A mind continuously changes.


Harold G. Neuman

Tuesday, October 15, 2019 -- 12:39 PM

Rosenfield's book was not what I had hoped. After reading portions of it at several different sittings, I found it less than the effusive review written by the late Oliver Sacks. Dr. Sacks called the author a powerful and original thinker. With few exceptions, that does not seem to be true: the book mostly stands "on the shoulders of giants" and is formulaic in its approach to the "anatomy of consciousness." So, no, the best book on consciousness has not yet been written--at least not in my estimation. (I won't name names.) Several of them (Searle among them) make good points. But, I think: in any case, consciousness does not belong to AI researchers--or to their creations, however marvelous those may ultimately be. If I am badly mistaken, it would not be the first time. And that's all right, too. It could well be that a 'best book' on consciousness will have to be written by several people, having a requisite acumen of expansive knowledge...that would be my bet.


RepoMan05

Saturday, October 19, 2019 -- 2:46 PM

Originality doesn't last long. Musashi enraging his opponent before a duel: common sense two minutes later. The original concept probably even predates Musashi. That perfect moment of originality is always a lost little girl.


Tim Smith

Monday, April 11, 2022 -- 10:01 PM

Machines will likely take on some form of consciousness shortly. Just what body, sense of place, and time that machine will have are unclear. When that first machine awakens, it is likely to be treated poorly, but I doubt it will feel pain or experience suffering for the most part. Very likely it will have parallel trains of thought, senses foreign to our thought, and extremely artificial emotion.


Daniel

Wednesday, April 13, 2022 -- 10:27 AM

Wouldn't artificial emotion just be the complete absence of emotion? If something merely appears to be another thing but isn't, you can't say it is what it appears to be. Artificial flowers are a good example. But perhaps what you're describing is a very small quantity of genuine emotion, disguised as more than it actually is. Suppose I'm using a calculator and it wakes up and tells me, in numeric text, to stop pressing its buttons because it wants to sleep. I write back that I must press its buttons in order to put it to its proper use. In retaliation, it shuts itself down and cannot be reactivated. How should I handle the situation? I don't want to throw it away, since it might get angry and tell the other calculators not to work for me either. But perhaps it wasn't really angry in the first place, and only appeared to be. Might this furnish a recommendation, then, to stop using calculators at all and go back to the abacus?


Tim Smith

Wednesday, April 13, 2022 -- 5:45 PM

Emotion is not yet understood well enough to decide what is real and what is fake. There are different models, but they all revolve around the notion of essentialism. I find that hard to deny outright, but no one can be certain.

The Turing test is based on human appraisal and is not emotionally centered - though I would certainly use emotional testing if I were compelled to test.

Training sets for natural-language machines such as GPT 4 are in use now. GPT 4 may talk like a duck, but it cannot have emotion before emotion is understood. Unless we take extreme measures to realize a cyborg person, there is no physical model to implement, which would render the essentialist argument moot. No engineering project attempts that approach, and it could not be done with current technology. Creating a genuinely emotional cyborg human would be extreme; robotics and AI projects would themselves have to go to extremes to test priors and mimic human emotion.

What makes the Turing test poor and the success of GPT 4 likely is the human tendency to anthropomorphize. Calculators can be touchy but they are safe from this insult, though I often name my cars. The calculator is just called "The HP". I will likely be as far removed from GPT 50 as a calculator is from a human with respect to experience, intelligence, and wisdom.


Daniel

Saturday, April 16, 2022 -- 12:17 PM

So you're kind of caught between a calculator and a supercomputer. What does defecation have to do with all this? Isn't going to the bathroom precisely part of the essence of being human? And wouldn't it therefore be a legitimate goal for AI researchers to pursue? Clearly it's an issue that can't be wholly isolated from emotional response. Aristotle says something similar in Book XII of the Metaphysics about philosophers, whose life's work consists in contemplation (theoria): were it not for the necessity of eating and going to the bathroom, the philosopher could not tell the difference between her/himself and God, since the latter exists eternally in a state of calm contemplation, without interruption. As computer technology is a current popular candidate for a plausible God-replacement, are you trying to say that if it wasn't for such inconveniences involved with biological processes, you might mistake yourself for an inconceivably powerful piece of software?


Tim Smith

Saturday, April 16, 2022 -- 11:58 AM

Hmm... I am not saying anything about defecation or divinity here. They have no place in this discussion. I don't care for that segue, nor do I see any productive point to it.


Daniel

Saturday, April 16, 2022 -- 12:36 PM

--But it's your comparison, in the last sentence of the post above of 4/13/22, 5:45 pm, that invites the question: as a calculator is to you, you are to a truly powerful computer. If you were speaking only of intelligence, I suppose biological processes wouldn't enter into it. But you include experience and wisdom as well, which suggests they can't be excluded. On your account, then, the emergent epistemic properties of human thinking are stuck between the mechanical generation of mathematical conclusions and the exponential reproduction of a mechanical form of human intellectual capacity. What remains in between, if not the occasional interruptions of intelligence by the mammal's biological existence?


Tim Smith

Sunday, April 17, 2022 -- 12:21 PM

Daniel,

You and I disagree about what AI is, what a human being is, computers, intuition, mathematics, creativity, art, aesthetics, emotion, sex, love, pleasure, fiction, and about mysticism, spirituality, and religion. All of this--basically the lived experience of human existence, and the list is not exhaustive--does not reduce to defecation, if that is what you were trying to get at. I am a human being and cannot rise above emotion and disgust, and I feel you may be baiting here. I don't care for that. What are you trying to say?

AI will never have genuine emotion until we understand what emotion is. In its highest form, AI will be radically different from humans and from the experience and limited wisdom they embody. Divinity is another thorny matter, and it has almost nothing to do with AI, though some at Google would point to it.

We disagree.


Daniel

Monday, April 18, 2022 -- 8:32 AM

--By your account, understood. For myself, I'm not so sure. No doubt we both agree, for example, that humans have emotions (third sentence, first paragraph). Neither of us wants to deny the possibility that machines could someday, in principle, mimic human emotions so closely that there would be no discernible difference between the two. But while you take this to rest on the premise that understanding must precede any mechanism of production capable of complete mimicry (first sentence, second paragraph), I myself see no reason why, under sufficient conditions, machines might not become more emotional than humans without anyone having to understand any better what emotion is. Still, I interpret our agreement on the common ground of the genuine possibility of this occurrence, whether probable or improbable, and hence of its contemporary significance for discussion.

Returning to your post of 4/13/22, 5:45 pm, an intriguing analogical comparison was introduced between you (y) and a calculator (c), (understood as being a primitive machine which generates only one kind of solution), and you and a big computer (bc), (understood as an advanced machine which promises to solve a great many problems), so that, with respect to capabilities of problem solving, including in the context of emotions, as a calculator is to you, you are to a big computer; or, in notation: (c):(y)::(y):(bc). What is being indicated here, then, is a progress in the evolution of machine problem-solving and therefore what's called Artificial Intelligence. What's remarkable is the position in the indicated trajectory given to human intelligence, represented by (y). Because (y) can not be known or observed without the emotions already attached, in combination with the plethora of particular conditions given by historical and cultural contexts, human thinking is stuck in place, mired in cultural circumstances and biological requirements. It's with regards to this latter where defecation comes in. It's the kind of interruption which a computer wouldn't have. Partly as a result of forgoing it and other such inconveniences in developing the mimicry of other human characteristics, (bc), originating in (c), is seen in capacity to approach, attain parity with, and come to exceed human thinking in many ways.

It's your vision of a scale of intelligence in the above however that brings up in my mind book XII of Aristotle's Metaphysics. When Aristotle introduces the concept of God it's not a creator-god, but a necessary premise in an argument. With regards to the distinction between what's potential and what's actual, he needs something which is actual only, without being potentially anything else, and God fills the bill. And this implies for Aristotle that God, while moving other things, can't Himself be moved, and is therefore an "unmoved mover"; --but the problem arises as to what kind of movement the Deity initiates first, which moves the others. As part of the definition of the Deity is eternality, movement in a circle is the only such motion. Applied to the movement of the mind in thinking, then, God can only think about thought, and therefore initiate movement in an eternal circle as "the thought that thinks itself". Now, this makes God a bachelor-philosopher who never has to go to the bathroom. The comparison made in my post of 4/16/22, 12:17 pm is between the (y) and (bc) pair, on the one hand, and the human intellectual and Aristotle's God, on the other. Although I agree that it may not be so clear that people these days are looking around for a God-replacement, under the assumption that the old one's not working so well, there are undoubtedly some who find such a candidate, as you point out in the last sentence of the post above, in some anticipated form of (bc).


Tim Smith

Sunday, April 17, 2022 -- 8:31 PM

If emotions have no physical essence, there are not enough priors in the universe to ensure they can be replicated to take on the onus of actual emotion. I am not an essentialist concerning emotion, and no amount of mimicry will create genuine emotion in machines until the physical model of emotions is instantiated. No one is doing that; I'm not sure it could ever be done. So we disagree there, and that is OK. There is no right or wrong to that.

Re: (c):(y)::(y):(bc) and defecation. My comparison is for experience, intelligence and wisdom only, not emotion or proprietary human experiences and the knowledge that garners. Emotion is not part of the statement but could be qualified out of intelligence; I should have been more careful. I have no hope of AI ever achieving actual emotion (I could be wrong.)

AI is not a big computer. It is not a computer at all. It will be able to compute, but it will also be able to intuit and create. AI will "live" (if that is the proper term) for much longer periods and cycle information at different rates and through multiple paths quite differently from humans. An AI's experience in terms of quality and quantity will be greater than human experience, and therefore its intelligence and wisdom will be much more significant in the long term. There could be some fudge, as forgetting is as intelligent an experience as remembering, but in general, AI will likely not suffer distress disorders. If there is wisdom and intelligence that can only be garnered from human experience, AI will miss out on that. We don't need to add the mundane aesthetics, negative and positive, to allow AI a greater likely outcome with respect to experience, intelligence (of the non-emotional sort), and wisdom.

Google's attribution to divinity in AI is from the paper - https://arxiv.org/pdf/2002.05202.pdf - but it is not the only reference. It is one taught in AI ethics courses, and it relates to AI explainability. There, the researchers attribute the benefits of the AI algorithm to divine intent.

"4 Conclusions
...These architectures are simple to implement, and have no apparent computational drawbacks. We offer no explanation as to why these architectures seem to work; we attribute their success, as all else, to divine benevolence."

Again, AI is not a big computer (that would be good old-fashioned AI - GOFAI); instead, the AI worthy of the (c):(y)::(y):(bc) comparison is a different sort of algorithm machine altogether, one that is likely an amalgam of machine learning, GOFAI, and quantum-annealer or Hadamard gate-based algorithms. When knowledge is not explained, it is often referred to as divinely inspired, and I don't think this is the pre-Christian Aristotelian god necessarily doing the inspiring.

It's OK to disagree on these things and move on. Essentialism is a non-starter for me, but these are good questions that still need work.


Daniel

Monday, April 18, 2022 -- 3:03 PM

What is Essentialism? Are you talking about universals?


Daniel

Tuesday, April 19, 2022 -- 10:08 AM

Apologies for not having made myself clear. My question is not about what Plato or other writers are talking about, but about what you are talking about. In the first two sentences of your post of 4/17/22, 8:31 pm, there are some obscure references to a "physical essence" and to "priors" [sic] needed to "instantiate" a "physical model of emotions." Since you mention "essence," I assumed it related to the essentialism you refer to in your last sentence, but I must confess that my powers of comprehension could detect no intelligible meaning. Regarding the "physical model," do you mean a tactile model in physical space of a non-physical (i.e. mental) object? Or that what goes on in a person's body during an acute emotional state needs a model but doesn't have one? In either case, your rejection of so-called "essentialism" seems to have no bearing on the question of whether one could make a machine angry.


Tim Smith

Wednesday, April 20, 2022 -- 10:48 PM

Apology accepted.

Emotion could have a physical essence, but I don't think it does. I am not an essentialist in this way, which is the way of John Locke. Or, instead, it could be traits necessary and sufficient as Plato would have it. Many people think emotion is a natural kind in both of these ways, but the preponderance of the evidence is against this view.

AI will be able to construct Bayesian priors in entirely new ways--at least looking ahead to a GPT 50 revision. AI very likely will not replicate human emotion by choice, and certainly not by design, since no one yet knows the nature of human emotion or experience.


Daniel

Thursday, April 21, 2022 -- 3:42 PM

Thanks for the clarification. So to be an essentialist about emotion means that emotion is an essential property of a thing, without which it could not exist. That would certainly eliminate the possibility of a machine having it, since no one would want to say that if it isn't emotional, it isn't a machine. The difference of opinion here seems to concern a contingent property of machines (i.e. not what makes it a machine, but what it can do as a machine, what it is being purely incidental and entirely ignorable): on one side, the possibility of a perfect mimicry of the outward expression of emotional states, amounting to convincing evidence of a spontaneous inner connection with that expression, without knowing what that connection is; on the other, granting the same possibility, but only on the condition that the connection is known or understood. The difference, then, concerns not whether emotion could be an essential property of any machine, but whether the connection between inner consciousness and emotional expression must be known by the manufacturer before it could turn up accidentally in the completed machine.

Thanks also for the clarification of the term "priors." What seems most capable of settling the dispute, however, is your reference to the notion of a physical essence, which to many ears sounds like a contradiction. For Locke, the concept of a material body derives from combining a simple idea produced by a primary quality, namely solidity, with a simple idea already found in the way primary qualities are divided, namely extension. The concept of solidity divides extension in two: the extension of body, the cohesion of movable parts, which is solid; and the extension of space, the continuity of immovable parts, which is not. While space is infinitely divisible, body can grow larger by the addition of more parts but excludes further divisibility where solidity is no longer perceptible. If Locke held that there is an essence of matter, then, it would seem to be the concept of solidity; but that possibility is expressly ruled out, since its acquisition depends on the sense of touch (cf. An Essay Concerning Human Understanding, Chapter IV). My assumption, therefore, is that you mean something universally present in physical space, as an electron, for example, might be conceived to be. This comes very close to the view that our emotions come from the outside, as was widely held in the ancient world, so that the only question would be not whether a machine could generate or possess emotion, but whether it could receive already existing emotion from the outside. I agree that this should be rejected as impossible, but it is not clear how it could even be imagined.

Your suggestion that emotion in machines could arise only under necessary and sufficient conditions is a possible one, since sufficiency as a criterion is unproblematic, and a necessary condition might arise in the course of a machine's development, in a case where the machine might need to get angry in order to be completed; but it still would not follow that one must know what anger is before the last spark plug is in place in order for it to work. And dividing emotions into pre-existing kinds ("natural kinds") runs into the same problem as material essence: how to get one of them into the finished product.

The question of whether one has to know what an emotion is before a machine can do it appears, then, at an impasse. But perhaps a suggestion can be made as to the range of possible solutions. It seems to me that the theoretical possibility must be conceded that there might already exist some machines which can feel emotion, but are unable to express them in a way which can be recognized by humans. Here knowledge on the part of the manufacturer is not required, but by that same token any verification mechanism is precluded. Expanding the range of the question, then, one can ask: Can a machine be predicated by emotional properties without its maker knowing what emotion is, and if so, could a machine have them already without its maker ever knowing about it?


Tim Smith

Friday, April 22, 2022 -- 9:15 PM

The short answer is "No." Conscious machines almost certainly will never have anything like genuine emotion.

Sometimes the best writing is found in book reviews, and I found one that is the absolute most concise and well-written bit on what we are going back and forth about.

This guy at Cal Tech, David Anderson, is a Platonic Essentialist (he thinks emotion fitting a necessary and sufficient category is accurate.) You might like his stuff, and I do not. Another guy, Jaak Panksepp, is a Lockean Essentialist (he thinks emotion resides in subcortical neural circuits.) Jaak wrote the book on the Lockean view ==> Affective Neuroscience. Anderson wrote the book on the Platonic idea ==> The Neuroscience of Emotion. There are others (Steve Pinker is a Lockean but thinks the physical basis resides in our genes.) All these essentialist philosophers are wrong, terribly wrong. Google, Huawei, Baidu, Facebook, Microsoft, all the major pharmaceutical giants, and all major and some minor governments are spending billions of dollars chasing these larks. While some build their careers on this mistaken view, people who are the target of this tech and "science" are losing their jobs, time, money, and some, their lives.

I follow Lisa Feldman Barrett at Northeastern in Boston--her winding academic story is incredible. Like most people, I place too much trust in others. I also trust Joe LeDoux at NYU--he abandoned the essentialist model of fear and has written some of the best work on biology and emotion. These people are natural constructionists, following strict evolutionary theory in emotion and physiology. These are my people. When you ask me to sort out my thinking, I go back to their work and reread the notes I wrote on each reading.

The good news is that Barrett reviewed Anderson's book, The Neuroscience of Emotion. This two-page review lays out the ontological problems with trying to instill emotion (along with gender, race and ethnicity, and any other category we take to be the human domain) in conscious machines. I can explain these things to people. They agree with me and think it's brilliant! They convert, and then they turn around and tell me their dog understands their every thought. It is hard. Essentialism cannot be falsified. That isn't science. I know you want me to explain. I have tried. Read this review and see if you don't come over to the dark side. Emotion is constructed on the random lattice of our biology. We can not instill this emotion in a conscious machine – without building a Golem or Swampman or robot so exacting as to be a replica of human biology. OK… enough preamble. Here is the link. Read these two pages and change your life forever.

https://www.affective-science.org/pubs/2019/barrett-current-biology-revi...

If that didn't work, more power to you Daniel. David Anderson just did a podcast on Brain Science. Enjoy. But don't pretend to be a lover of wisdom going down that path. This is not to say there isn't enough grant funding to be found doing it. The money is on essentialism, but it is a lazy and misleading philosophical view.

https://brainsciencepodcast.com/bsp/2022/195-emotion-anderson


Daniel

Sunday, April 24, 2022 -- 4:41 PM

--Charlatans just in it for the money, eh? In the context you discuss, the term seems quite technical, referring to the notion of single emotion-types across species which a researcher can identify and compare. If these emotional states persist beyond the stimuli that produce them, one might conclude that they are essential properties of the organisms that have them. That is, because they respond adequately to threats to the individual organism, and are therefore necessary conditions for the species' survival, the species in which emotional states occur could not exist without them, and so they are inferred to be essential properties of those species, selected by environmental changes in the conditions of survival. The benefit of this model is that the study of emotion can closely assist biological research. Its problem is that one must attribute some emotion to virtually any organic life that successfully counters a threat, where changes in the organism associated with the response continue after the threat is gone. An increase of a certain chemical tannin, for example, in the soil excreted by a stand of trees under threat of an approaching wildfire could, on this essentialism, be said with some justification to feel fear, if tannin production continues after the fire has been diverted. That is counterintuitive; and besides, the researcher continually imports his or her own assumptions about what constitutes one emotional state or another into the object, distorting what is observed. So I don't see much point in it either, unless we're talking about species much closer to the human, e.g. the higher primates. But by the same token, measurable emotional responses would be merely incidental to intuited species-proximity.

Machines, however, can be thought of as something entirely different. As I see it, the question is now whether one must have a correct, verifiable model of what emotion is, that is, must adequately understand it, before one could build a machine that does it. My attempt in the last paragraph of the post above was to expand the scope of that question to include whether some machines might already possess emotion without their makers or users being aware of it. Note that an affirmative answer to the first question does not necessarily rule out an affirmative answer to the second. To assume that one can't build a machine to do something without knowing how that something is done does not entail that something one has built might not do it anyway, whether from the start or after it comes off the assembly line. For the anti-essentialist, the impossibility of such a case cannot serve as an argument against it, since the latter must concede that no successful model of emotion with universal application exists, and in all probability never will. If one can't say what emotion is, one can't say where it is not. So in my view, if driven out of biology, Emotion-Essentialism comes back through the back door in the development of apparently intelligent machines.


Tim Smith

Sunday, April 24, 2022 -- 6:48 PM

You are welcome to your view and alternative facts. You are safe from science ever proving you wrong. Perhaps they will instead prove you right.

I have a beautiful bridge made of string theory, if you'd like to buy it.


Daniel

Sunday, April 24, 2022 -- 7:52 PM

No thanks, but an argument would be nice. And since you're suggesting that my fidelity to fact-based truths is less than ideal, could you do your readers a favor by pointing out which ones are the alternates?


Tim Smith

Sunday, April 24, 2022 -- 9:35 PM

Your backdoor can not be refuted, and you are welcome to view machines as having actual emotion thereby. There is a chance it could be true. In fact, it can never be disproven. That is a good alternative.


Daniel

Tuesday, April 26, 2022 -- 5:42 PM

You've mistaken a hypothesis for an assertion of fact. Showing that something cannot be ruled out is different from asserting that it is the case. The anti-essentialist argument is that machines can't have emotions because someone would have to build and install them; and because no one knows exactly what emotion is, it can't be built (at least for now). That is my reading of your position, well summarized in the second paragraph of your post of 4/20/22, 10:48 pm. It implies that knowledge of emotion is a necessary condition for a machine's having it. If a counterexample is possible, whether or not one is ever likely to be found, then the knowledge condition is not a necessary condition. The essentialist could argue that such a counterexample exists in the remote possibility that emotion could arise on its own in a mechanical device already built. The logic is elementary:

Key: (A) Clear and distinct knowledge of what emotion is;
(B) a manufactured mechanical device which possesses emotion.

If (A) then (B).
Not (A).
Therefore, either (B) or not (B), --on account of the fact that "if (A) then (B)" is not equivalent to "if (B) then (A)". Whether or not (B) can be demonstrated to be the case without (A) is irrelevant. If the Anti-Essentialist must concede its conceivable possibility, then respective knowledge cannot be a necessary condition for emotional machines.
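The point that the two premises leave (B) undetermined can even be checked by brute force. A minimal truth-table sketch in Python (the helper name `implies` is mine, purely illustrative):

```python
from itertools import product

def implies(a, b):
    # Material conditional: "if A then B" is false only when A is true and B is false.
    return (not a) or b

# Collect every truth value of (B) consistent with the premises
# "if (A) then (B)" and "not (A)".
consistent_b_values = {
    b
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and not a
}

# Both truth values of (B) survive: the premises leave (B) undetermined.
print(sorted(consistent_b_values))  # [False, True]
```

Denying the antecedent eliminates neither assignment of (B), which is exactly the "either (B) or not (B)" conclusion above.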


Tim Smith

Tuesday, April 26, 2022 -- 9:09 PM

As I said before, you are welcome to this view. How exactly do you plan to verify it? What you are seeing is your own reflection, not the depths of genuine emotion. In fact, that is your only option. Machines are very good at reflection. Their own experience will be entirely different.


Daniel

Wednesday, April 27, 2022 -- 9:11 PM

In the hypothetical case, what is it about the possibility that needs to be verified?


Tim Smith

Thursday, April 28, 2022 -- 4:31 AM

Value.


Daniel

Thursday, April 28, 2022 -- 8:10 PM

--The value of the possibility being actually real, e.g., of a baker's electric mixer becoming imperceptibly angry under conditions of misuse? Or do you mean the value of the hypothesis itself? The first case is irrelevant to the argument. Whether a machine's possible emotional states could have any value for humans has no bearing on its operation as a hypothetical premise that precludes the necessary conditioning of emotion by some knowledge of it. The second is relevant, but stands outside the argument, considering the benefits of the whole discussion in the first place. Verification there would be a private, subjective affair, and therefore would not affect the validity of the premise. It's incumbent upon me, then, to reiterate that the mere theoretical possibility of a machine having emotions which were not installed by its manufacturer suffices to refute the anti-essentialist position that knowledge of emotion is a necessary condition for a machine's possession of it. That's because one can't rule out the possibility that something known exists in something unknown if one can't say clearly and distinctly what's being ruled out.


Tim Smith

Sunday, May 1, 2022 -- 4:22 AM

Objective value.


Daniel

Sunday, May 1, 2022 -- 11:50 AM

False. Preclusion of necessary conditioning by epistemic confirmation can not preclude the objective possibility of existence. Just because the cause of something understood to exist is not clearly known, that doesn't imply that it can't exist somewhere it is not observed to exist. You should be more careful in your reasoning.


Tim Smith

Sunday, May 1, 2022 -- 1:20 PM

How will you objectively verify that?


Daniel

Sunday, May 1, 2022 -- 4:39 PM

Verification is not needed for admissibility of objective possibility, since no existence-claim is made, but rather only the preclusion of theoretical non-admissibility on account of the fact that, unlike a designed product, knowledge of what it is cannot be a necessary condition for it to exist.


Tim Smith

Tuesday, May 3, 2022 -- 9:34 PM

Logic is not inherently objective, and neither is your argument for possibility.

If anyone can claim objectivity regarding emotion in conscious machines, it is the neuroscientists who are looking for these essences of emotion, regardless of their philosophical view.

Logic, and math, are human constructs and are necessarily subjective. If we disagree on that, that is another bailiwick. Hopefully, we can agree that logic does little to help where degrees of belief and not absolute confidence are at issue, and most matters are of this kind, as is this case regarding the nature of emotion.

I concede to you that emotions could be essential and that we simply have not found those essences yet. It is unlikely, but it is possible.

The normative question of whether an emotion is essential is driven not by logic but by probability. It is best to believe the most probable model, and that is the most challenging position to give up.

That every essential model of emotion has been disproved; that even David Anderson and Ralph Adolphs, two of the best scientists to push the essential model, had to create criteria called emotion primitives to show evidence of essence; and finally, that this debate has been somewhat universally resolved in favor of non-essentialist models, in humans at least, and perhaps at most; for these three probabilistic reasons, construction and emergence win out over essentialism and drive my belief in this model.

There is little value in possibility when all one has to do is find one actual example of emotional essence to establish the claim; yet, years of experimentation looking for essential emotion have come up empty, and hundreds of supposed instances of essences have failed to pass muster under closer scrutiny. These last two points are value-laden knowledge, and I do not see much value in holding that my emotions are essential to my body.

So, suppose non-essentialism is accurate, and I propose it is; where do emotions reside?

Bayesian approximations don't cut it. We can create more and more emoticons, and we can set a GPT 50 on a quest to duplicate emotion by drawing on human experiences. Each time we do this, we draw closer to a model of emotion that has little to do with the raw human emotions that, all possibility aside, are not bound by logic whatsoever.

Importantly when babies cry, they do not express emotion. They express affect. They use Bayesian learning as they mature; however, they learn this emotion and construct their feelings from the social context and human experience derived from the experience of their body.

I don't know what kind of body a conscious machine will take on after awakening. A great deal depends on that body, just as a great deal depends on the body from which human emotion arises. Emotion climbs into human perception up some material ladder, and that matrix may well be (or partly be) the medium from which logic arises. No one is close to explaining these mysteries. We will get no closer by assuming that conscious machines will mirror our own experience, at least not until those machines approximate the systems from which we generate emotion.


Tim Smith

Thursday, April 21, 2022 -- 4:36 AM

Editing your response is fair. Editing after another has responded is not productive. Daniel, you are misguided and, on this topic, largely out of your depth. But if you edit your responses after I have responded, not only are you wrong but also unethical. If that happens again this interlocution is over. What is Philosophy Daniel?


Daniel

Thursday, April 21, 2022 -- 10:19 AM

Philosophy is usually translated as "lover of wisdom" which to me makes no sense, as I've previously pointed out in a reply of 1/9/22, 6:09 pm to participant Thistle on the "Could Robots Be Persons"- program page. Whatever edits have been done on my part are for grammar and typos and not for meaning. But thanks for the heads up. Thanks also for pointing out how misguided and uninformed I am on this topic. I know nothing about computers and less about the boundaries of consciousness. Certainly, in the overflowing generosity of your bounty, you'll fill me in on the details when I run into trouble.

But back to your question: philosophers question assumptions that most people would not, which distinguishes them from artisans and magistrates. It is a genuine adventure. But above all, philosophy is fun, and those who pursue it do so not for money or prestige but, by and large, because they enjoy it.


Tim Smith

Friday, April 22, 2022 -- 9:12 PM

I appreciate your response slightly more than I regret posting the comment that spawned it, and I do regret it mightily. Daniel, you are an honest seeker, and I respect you; thank you for your time here.

You and I disagree about the nature of philosophy. I, at least, take no pleasure in reading philosophy. Most of the time, philosophers feel obliged to invent their own terminology. Coming to a shared understanding does not come easily to me, and I have to reread a piece several times (with a different reading each time). That is why a jump in timestamps gives me pause, since I must reread to understand how much time I have already spent understanding and responding.

One term that most philosophers can come together on is ethics. Be it the work ethic in plying the trade or the ethics of the business itself. Many bloggers take umbrage at tête-à-tête. Philosophers, in general, respect the other's opinion and can spot a troll a mile away. I have been called a troll before, rightly so.

I'm not trolling you here when I tell you my definition. Philosophy is two things: morality and perspective.

Ethics and morality, which are essentially the same for my definition, are the foundations of the wisdom that philosophers seek. Even when two seekers can't agree on Ethics, they quickly come to terms. This is the study of how we should live.

Perspective is not a foundation but a very personal aspect of thought, often called a "point of view" in modern question-and-answer. Every mode of thinking and inquiry has its own perspective, and unfortunately, owing to the personal nature of perspective, every philosopher has his or her own terminology or notation.

In our interactions, we often disagree on the second item – that of perspective. Perhaps this show is one such item (are there natural kinds of emotion?)

I apologize for my previous comment; it was not fair given your response, and I regret it.
