What would AGI look like? How would that relate to how we think about humans? Can we use philosophy of mind (and other philosophies) to think about both humans and AGI, to work out whether AGI is even possible with our current technology? This episode dips our toes into the works of Thomas Nagel ("What Is It Like to Be a Bat?", 1974), René Descartes' "I am thinking, therefore I am" (Discourse on the Method, 1637), Simone de Beauvoir (The Ethics of Ambiguity, 1947) and David Boonin (The Non-Identity Problem and the Ethics of Future People, 2014) to see how a large language model holds up against these philosophies.

The world can be divided into subjects and objects. You can ask "What is it like to be…?" only about subjects, and Thomas Nagel posits that this is consciousness. We can only imagine another being's actions (intention), not its experience. Can you ask, "What is it like to be an LLM?" What if it had even more knowledge? What if it was trained on every experience?

Artificial General Intelligence is generally taken to mean an AI with roughly human-level intelligence, a definition that is more confusing than helpful. Now that technology has advanced, the Turing Test is clearly outdated: the ability to speak does not make one intelligent. Take "the pub test": would you expect AGI to work independently, taking actions of its own accord and generally behaving as a human would? Should AGI have its own goals and desires? If it shouldn't, how can it plan, behave, or work like a human? If it should, what are the ethics of keeping an AGI confined, working as a slave?
The mind, according to René Descartes: you could be deceived by an evil demon which presents a false world, and so you cannot trust any of your senses. Cogito, ergo sum, "I am thinking, therefore I am": the ability to doubt oneself means there is a thinking mind that exists, for if there were no mind, how could it doubt itself by means of its own internal thoughts and reasoning? An LLM has no intentionality, no actions of its own, no desires, no goals; it is a fixed set of data that can compute any given function. If an LLM is a functionally complete unit that does not exist in the world, how could it ever have self-doubt? Therefore an LLM cannot think and can be neither conscious nor sentient.

The Ethics of Ambiguity, Simone de Beauvoir: our freedom comes from our nothingness. There is no absolute meaning or purpose in the universe, and we are not an object which is only one thing, so we are self-conscious and can transcend our current selves: a "for-itself". This presents an ambiguity, because we are both free as a subject and constrained as an object of others, and it leaves us with the difficulty of expressing and living our own freedom whilst respecting the freedom of others. An LLM is one thing and cannot transcend; it is an "in-itself". An LLM does not have freedoms, and the idea of giving it freedoms is nonsensical, because what could it do with them? It is an object which only presents an illusion of a for-itself through external interaction: a subject using an object.

The Non-Identity Problem: what moral obligation do we have to future people, when any decision we make will cause some to exist and others never to exist, in different states of existence? Extending this, David Boonin asks whether it is ethical to conceive a baby with the intention of it having a disability (and what makes a life worth living), and notes that not conceiving a disabled baby would cause a person never to exist: the only way you can exist is the way you exist.
Applying a "pub test" to both concepts would conclude that there is no ethical dilemma in allowing or blocking the creation of any LLM over another, and no dilemma in intentionally creating a "disabled" LLM.

There is so much noise around "AI" that it is difficult to stop and think about it. If you believe that an AGI would need some theory of mind, then no transformer-based AI, including LLMs, can ever be AGI. We need a fundamentally new technology, one around which we can formulate such a theory, before the AGI conversation can be had.

In summary: AI as we know it can never pass as a subject, no matter how much training, context, or how many agentic loops it has. If AI could be AGI without status as a subject, what would that look like? How could it share your goals so as to work as a "general intelligence"?