
  • 1
    ISBN: 9783662540336
    Language: English
    Pages: 1 online resource (xii, 261 pages)
    Series Statement: The Frontiers Collection
    Parallel Title: Also published as The technological singularity
    Parallel Title: Print version Callaghan, Victor The Technological Singularity : Managing the Journey
    DDC: 100
    Keywords: Computer science ; Electronic books ; Singularity ; Technological progress ; Ethics of science ; Artificial intelligence ; Risk management
    Abstract: Foreword -- References -- Acknowledgements -- Contents -- 1 Introduction to the Technological Singularity -- 1.1 Why the "Singularity" Is Important -- 1.2 Superintelligence, Superpowers -- 1.3 Danger, Danger! -- 1.4 Uncertainties and Safety -- References -- Risks of, and Responses to, the Journey to the Singularity -- 2 Risks of the Journey to the Singularity -- 2.1 Introduction -- 2.2 Catastrophic AGI Risk -- 2.2.1 Most Tasks Will Be Automated -- 2.2.2 AGIs Might Harm Humans -- 2.2.3 AGIs May Become Powerful Quickly -- 2.2.3.1 Hardware Overhang -- 2.2.3.2 Speed Explosion -- 2.2.3.3 Intelligence Explosion -- References -- 3 Responses to the Journey to the Singularity -- 3.1 Introduction -- 3.2 Post-Superintelligence Responses -- 3.3 Societal Proposals -- 3.3.1 Do Nothing -- 3.3.1.1 AI Is Too Distant to Be Worth Our Attention -- 3.3.1.2 Little Risk, no Action Needed -- 3.3.1.3 Let Them Kill Us -- 3.3.1.4 "Do Nothing" Proposals-Our View -- 3.3.2 Integrate with Society -- 3.3.2.1 Legal and Economic Controls -- 3.3.2.2 Foster Positive Values -- 3.3.2.3 "Integrate with Society" Proposals-Our View -- 3.3.3 Regulate Research -- 3.3.3.1 Review Boards -- 3.3.3.2 Encourage Research into Safe AGI -- 3.3.3.3 Differential Technological Progress -- 3.3.3.4 International Mass Surveillance -- 3.3.3.5 "Regulate Research" Proposals-Our View -- 3.3.4 Enhance Human Capabilities -- 3.3.4.1 Would We Remain Human? -- 3.3.4.2 Would Evolutionary Pressures Change Us? -- 3.3.4.3 Would Uploading Help? -- 3.3.4.4 "Enhance Human Capabilities" Proposals-Our View -- 3.3.5 Relinquish Technology -- 3.3.5.1 Outlaw AGI -- 3.3.5.2 Restrict Hardware -- 3.3.5.3 "Relinquish Technology" Proposals-Our View -- 3.4 External AGI Constraints -- 3.4.1 AGI Confinement -- 3.4.1.1 Safe Questions -- 3.4.1.2 Virtual Worlds -- 3.4.1.3 Resetting the AGI -- 3.4.1.4 Checks and Balances
    Abstract: 3.4.1.5 "AI Confinement" Proposals-Our View -- 3.4.2 AGI Enforcement -- 3.4.2.1 "AGI Enforcement" Proposals-Our View -- 3.5 Internal Constraints -- 3.5.1 Oracle AI -- 3.5.1.1 Oracles Are Likely to Be Released -- 3.5.1.2 Oracles Will Become Authorities -- 3.5.1.3 "Oracle AI" Proposals-Our View -- 3.5.2 Top-Down Safe AGI -- 3.5.2.1 Three Laws -- 3.5.2.2 Categorical Imperative -- 3.5.2.3 Principle of Voluntary Joyous Growth -- 3.5.2.4 Utilitarianism -- 3.5.2.5 Value Learning -- 3.5.2.6 Approval-Directed Agents -- 3.5.2.7 "Top-Down Safe AGI" Proposals-Our View -- 3.5.3 Bottom-up and Hybrid Safe AGI -- 3.5.3.1 Evolutionary Invariants -- 3.5.3.2 Evolved Morality -- 3.5.3.3 Reinforcement Learning -- 3.5.3.4 Human-like AGI -- 3.5.3.5 "Bottom-up and Hybrid Safe AGI" Proposals-Our View -- 3.5.4 AGI Nanny -- 3.5.4.1 "AGI Nanny" Proposals-Our View -- 3.5.5 Motivational Scaffolding -- 3.5.6 Formal Verification -- 3.5.6.1 "Formal Verification" Proposals-Our View -- 3.5.7 Motivational Weaknesses -- 3.5.7.1 High Discount Rates -- 3.5.7.2 Easily Satiable Goals -- 3.5.7.3 Calculated Indifference -- 3.5.7.4 Programmed Restrictions -- 3.5.7.5 Legal Machine Language -- 3.5.7.6 "Motivational Weaknesses" Proposals-Our View -- 3.6 Conclusion -- Acknowledgements -- References -- Managing the Singularity Journey -- 4 How Change Agencies Can Affect Our Path Towards a Singularity -- 4.1 Introduction -- 4.2 Pre-singularity: The Dynamic Process of Technological Change -- 4.2.1 Paradigm Shifts -- 4.2.2 Technological Change and Innovation Adoption -- 4.2.3 The Change Agency Perspective -- 4.2.3.1 Business Organisations as Agents of Change in Innovation Practice -- 4.2.3.2 Social Networks as Agents of Change -- 4.2.3.3 The Influence of Entrepreneurs as Agents of Change -- 4.2.3.4 Nation States as Agents of Change -- 4.3 Key Drivers of Technology Research and Their Impact
    Abstract: 4.4 The Anti-singularity Postulate -- 4.5 Conclusions -- References -- 5 Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda -- 5.1 Introduction -- 5.1.1 Why These Problems? -- 5.2 Highly Reliable Agent Designs -- 5.2.1 Realistic World-Models -- 5.2.2 Decision Theory -- 5.2.3 Logical Uncertainty -- 5.2.4 Vingean Reflection -- 5.3 Error-Tolerant Agent Designs -- 5.4 Value Specification -- 5.5 Discussion -- 5.5.1 Toward a Formal Understanding of the Problem -- 5.5.2 Why Start Now? -- References -- 6 Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process -- 6.1 Introduction -- 6.2 Key ASI R&D Risk and Decision Issues -- 6.3 Risk Analysis Methods -- 6.3.1 Fault Trees -- 6.3.2 Event Trees -- 6.3.3 Estimating Parameters for Fault Trees and Event Trees -- 6.3.4 Elicitation of Expert Judgment -- 6.3.5 Aggregation of Data Sources -- 6.4 Risk Management Decision Analysis Methods -- 6.5 Evaluating Opportunities for Future Research -- 6.6 Concluding Thoughts -- Acknowledgements -- References -- 7 Diminishing Returns and Recursive Self Improving Artificial Intelligence -- 7.1 Introduction -- 7.2 Self-improvement -- 7.2.1 Evolutionary Algorithms -- 7.2.2 Learning Algorithms -- 7.3 Limits of Recursively Improving Intelligent Algorithms -- 7.3.1 Software Improvements -- 7.3.2 Hardware Improvements -- 7.4 The Takeaway -- References -- 8 Energy, Complexity, and the Singularity -- 8.1 A Contradiction -- 8.2 Challenges -- 8.2.1 Climate Change -- 8.2.2 Biodiversity and Ecosystem Services -- 8.2.3 Energy-or, Where's My Jetsons Car? -- 8.2.4 The Troubles with Science -- 8.3 Energy and Complexity -- 8.4 Exponentials and Feedbacks -- 8.5 Ingenuity, not Data Processing -- 8.6 In Summary -- Acknowledgements -- References
    Abstract: 9 Computer Simulations as a Technological Singularity in the Empirical Sciences -- 9.1 Introduction -- 9.2 The Anthropocentric Predicament -- 9.3 The Reliability of Computer Simulations -- 9.3.1 Verification and Validation Methods -- 9.4 Final Words -- References -- 10 Can the Singularity Be Patented? (And Other IP Conundrums for Converging Technologies) -- 10.1 Introduction -- 10.2 A Singular Promise -- 10.3 Intellectual Property -- 10.3.1 Some General IP Problems in Converging Technologies -- 10.3.2 Some Gaps in IP Relating to the Singularity -- 10.4 Limits to Ownership and Other Monopolies -- 10.5 Owning the Singularity -- 10.6 Ethics, Patents and Artificial Agents -- 10.7 The Open Alternative -- References -- 11 The Emotional Nature of Post-Cognitive Singularities -- 11.1 Technological Singularity: Key Concepts -- 11.1.1 Tools and Methods -- 11.1.2 Singularity: Main Hypotheses -- 11.1.3 Implications of Post-singularity Entities with Advanced, Meta-cognitive Intelligence Ruled by Para-emotions -- 11.2 Post-cognitive Singularity Entities and their Physical Nature -- 11.2.1 Being a Singularity Entity -- 11.2.1.1 Super-intelligent Entities -- 11.2.1.2 Transhumans -- 11.2.2 Post Singularity Entities as Living Systems? -- 11.3 Para-emotional Systems -- 11.4 Conclusions -- Acknowledgements -- References -- 12 A Psychoanalytic Approach to the Singularity: Why We Cannot Do Without Auxiliary Constructions -- 12.1 Introduction -- 12.2 AI and Intelligence -- 12.3 Consciousness -- 12.4 Reason and Emotion -- 12.5 Psychoanalysis -- 12.6 Conclusion -- References -- Reflections on the Journey -- 13 Reflections on the Singularity Journey -- 13.1 Introduction -- 13.2 Eliezer Yudkowsky -- 13.2.1 The Event Horizon -- 13.2.2 Accelerating Change -- 13.2.3 The Intelligence Explosion -- 13.2.4 MIRI and LessWrong -- 13.3 Scott Aaronson -- 13.4 Stuart Armstrong
    Abstract: 13.5 Too Far in the Future -- 13.6 Scott Siskind -- 13.6.1 Wireheading -- 13.6.2 Work on AI Safety Now -- 14 Singularity Blog Insights -- 14.1 Three Major Singularity Schools -- 14.2 AI Timeline Predictions: Are We Getting Better? -- 14.3 No Time Like the Present for AI Safety Work -- 14.4 The Singularity Is Far -- Appendix -- The Coming Technological Singularity: How to Survive in the Post-human Era (reprint) -- References -- References -- Titles in this Series
    URL: Full text (license required)
  • 2
    Online Resource
    [Place of publication not identifiable] : Chapman and Hall/CRC | Boston, MA : Safari
    Language: English
    Pages: 1 online resource (227 pages)
    Edition: 1st edition
    Keywords: Electronic books ; local
    Abstract: A day does not go by without a news article reporting some amazing breakthrough in artificial intelligence (AI). Many philosophers, futurists, and AI researchers have conjectured that human-level AI will be developed in the next 20 to 200 years. If these predictions are correct, it raises new and sinister issues related to our future in the age of
    Note: Online resource; Title from title page (viewed June 17, 2015) , Mode of access: World Wide Web.
  • 3
    Online Resource
    [Place of publication not identifiable] : MDPI - Multidisciplinary Digital Publishing Institute
    ISBN: 9783039218554 , 9783039218547
    Language: English
    Pages: 1 online resource (206 pages)
    Abstract: Attention in the AI safety community has increasingly started to include strategic considerations of coordination between relevant actors in the field of AI and AI safety, in addition to the steadily growing work on the technical considerations of building safe AI systems. This shift has several reasons: multiplier effects, pragmatism, and urgency. Given the benefits of coordination between those working towards safe superintelligence, this book surveys promising research in this emerging field regarding AI safety. On a meta-level, the hope is that this book can serve as a map to inform those working in the field of AI coordination about other promising efforts. While this book focuses on AI safety coordination, coordination is important to most other known existential risks (e.g., biotechnology risks) and to future, human-made existential risks. Thus, while most coordination strategies in this book are specific to superintelligence, we hope that some insights yield "collateral benefits" for the reduction of other existential risks, by creating an overall civilizational framework that increases robustness, resiliency, and antifragility.
    Note: English