Non-fiction

How to Teach Morality to AI and Robots(東大教授が挑むAIに「善悪の判断」を教える方法 「人を殺してはいけない」は“いつも正しい”か? 英語版)

Synopsis

With the rapid advancement of artificial intelligence (AI) and robotics, some people predict that a society in which robots and human beings coexist is approaching. However, I wonder whether we could actually get along with robots in a world where we cannot accept diversity even among our fellow human beings. If robots joined this deeply divided world, would they not simply cause even greater chaos?
I recently began researching a morality engine to control the behavior of robots. Simply put, I study how to make robots distinguish good from evil on their own, in preparation for a future in which robots and human beings coexist.
The concept of morality for robots is nothing new. Back in the 1940s, for example, the American science fiction writer Isaac Asimov introduced his famous Three Laws of Robotics in his fiction.
The Three Laws are very well known, and some people even treat them as golden rules for robots to observe. To me, however, they contain significant problems and are unsuitable for practical use. As you read this book, you will come to see the fundamental defect in the Laws.
To study a moral engine that can regulate robots, we first need to describe the moral framework of human beings. Modeling such an abstract concept becomes possible when we use engineering thinking as a tool. In this book, I would like to think through this framework together with you, in words as simple and plain as possible.
If we can model human morality, we will be able to install it in the brains of robots. And if we can build a moral system that robots and human beings, beings different from each other, can share, it will in turn help us overcome the divisions that arise from differences in standpoint among human beings, and help us develop a more inclusive and diverse society. Using such a new moral system, I would like to establish alternative principles to Asimov's Three Laws of Robotics and consider the possibility of a society in which human beings and robots coexist. Morality and robots may seem to have nothing in common, but by looking at the point where the two actually intersect, we can glimpse the principles of the future society that we human beings should aim for.
Throughout this intensive seminar, we will develop our arguments freely and widely. I also plan to provide a summary and practice exercises at the end of each session to help deepen your understanding. Let us get ready to think outside the box and dig deep into our imagination.

Contents

Introduction
Session 1. Is the “You Shall Not Kill” Rule Universal?
Session 2. Classifying Prior Moral Thoughts
Session 3. You Shall Not Kill… Whom?
Session 4. Modeling the Basic Principle of Morality
Session 5. Classifying Hierarchy of Morality
Session 6. Installing Morality onto Robots
Afterword
Hints for Practice Exercises
References

Product Information

Series
How to Teach Morality to AI and Robots(東大教授が挑むAIに「善悪の判断」を教える方法 「人を殺してはいけない」は“いつも正しい”か? 英語版)(扶桑社)
Author
Label
――
Publisher
扶桑社
Category
Non-fiction
Approximate page count
168
Release date
2021/8/27
Supported devices
  • PC browser viewer
  • Android (smartphone / tablet)
  • iPhone / iPad
  • Recommended environments
About the approximate page count

This is the number of pages when the book is displayed in the BOOK☆WALKER app at the standard font size on a typical smartphone. The count varies with your device and font-size setting, so please treat it as a rough guide.

  • Campaign details and periods are subject to change without notice.
  • Dates and times listed on this site are in Japan Standard Time (JST).

Price

880 yen (tax included)

800 yen (+ 80 yen consumption tax)

Coin reward breakdown

408 coins

  • Member rank (no rank this month): 1%
  • First-purchase 50% coin rebate: for the first purchase within 30 days of registration only, a 50% coin rebate is applied to the total pre-tax amount (see the worked breakdown below)
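
For reference, here is a minimal Python sketch of how the 408-coin figure appears to be derived. The 1% and 50% rates and the 800 yen pre-tax price are taken from this page, but summing the two rebates and the rounding rule are assumptions, not the store's official formula.

    # Hypothetical reconstruction of the 408-coin reward shown above.
    # Rates and the pre-tax price come from this page; the rounding
    # rule is an assumption.
    price_pre_tax = 800                         # yen (880 yen with tax)
    rank_coins = round(price_pre_tax * 0.01)    # 1% member-rank reward -> 8
    rebate_coins = round(price_pre_tax * 0.50)  # 50% first-purchase rebate -> 400
    print(rank_coins + rebate_coins)            # -> 408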

The number of coins awarded may change when multiple items are purchased together.

The member-rank reward rate is based on your member rank at the time the purchase is completed, so it may differ from the rate currently displayed.

About coupons:
When a coupon is used, the purchase is excluded from coin rebate campaigns, with some exceptions.
See each coupon's page for details.

Ratings and Reviews: "How to Teach Morality to AI and Robots(東大教授が挑むAIに「善悪の判断」を教える方法 「人を殺してはいけない」は“いつも正しい”か? 英語版)"

Ratings

* There are no ratings yet, or there are too few to display.
