Results for 'AI limitations'

997 found
  1. Paediatric Palliative Care during the COVID-19 Pandemic: A Malaysian Perspective.Lee Ai Chong, Erwin J. Khoo, Azanna Ahmad Kamar & Hui Siu Tan - 2020 - Asian Bioethics Review 12 (4):529-537.
    Malaysia had its first four patients with COVID-19 on 25 January 2020. In the same week, the World Health Organization declared it a public health emergency of international concern. The pandemic has since challenged the ethics and practice of medicine. There is palpable tension from the conflict between public health initiatives and individuals’ rights. Ensuring equitable care and distribution of health resources for patients with and without COVID-19 is a recurring ethical challenge for clinicians. Palliative care aims (...)
  2. Textile Diagrams. Florian Pumhösl's Abstraction as Method.T'ai Smith - 2015 - Zeitschrift für Medien- Und Kulturforschung 2015 (1):101-116.
    For Viennese artist Florian Pumhösl »abstraction is a method«, not a category. Or rather, if abstraction is the defining category of modernism, the objective is to reproduce modernism's problems and limits and exploit relationships among its parts. Considering what Pumhösl calls the »textile complex« of modernism, this essay examines the artist's work in parallel with Charles Sanders Peirce's diagram concept and Gottfried Semper's use of textile diagrams throughout Style in the Technical and Tectonic Arts. German abstract: »Abstraction« is, for the Viennese (...)
  3. Textile Diagrams. Florian Pumhösl's Abstraction as Method.T'ai Smith - 2015 - Zeitschrift für Medien- Und Kulturforschung 6 (1):101-116.
    For Viennese artist Florian Pumhösl »abstraction is a method«, not a category. Or rather, if abstraction is the defining category of modernism, the objective is to reproduce modernism's problems and limits and exploit relationships among its parts. Considering what Pumhösl calls the »textile complex« of modernism, this essay examines the artist's work in parallel with Charles Sanders Peirce's diagram concept and Gottfried Semper's use of textile diagrams throughout Style in the Technical and Tectonic Arts.
  4. Limits of trust in medical AI.Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI (...)
  5. Can AI Help Us to Understand Belief? Sources, Advances, Limits, and Future Directions.Andrea Vestrucci, Sara Lumbreras & Lluis Oviedo - 2021 - International Journal of Interactive Multimedia and Artificial Intelligence 7 (1):24-33.
    The study of belief is expanding and involves a growing set of disciplines and research areas. These research programs attempt to shed light on the process of believing, understood as a central human cognitive function. Computational systems and, in particular, what we commonly understand as Artificial Intelligence (AI), can provide some insights on how beliefs work as either a linear process or as a complex system. However, the computational approach has undergone some scrutiny, in particular about the differences between what (...)
  6. AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind.Jocelyn Maclure - 2021 - Minds and Machines 31 (3):421-438.
    Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can be derived (...)
  7. Can AI-Based Decisions be Genuinely Public? On the Limits of Using AI-Algorithms in Public Institutions.Alon Harel & Gadi Perl - 2024 - Jus Cogens 6 (1):47-64.
    AI-based algorithms are used extensively by public institutions. Thus, for instance, AI algorithms have been used in making decisions concerning punishment, providing welfare payments, making decisions concerning parole, and many other tasks which have traditionally been assigned to public officials and/or public entities. We develop a novel argument against the use of AI algorithms, in particular with respect to decisions made by public officials and public entities. We argue that decisions made by AI algorithms cannot count as public decisions, namely (...)
  8. AI, Suicide Prevention and the Limits of Beneficence.Bert Heinrichs & Aurélie Halsband - 2022 - Philosophy and Technology 35 (4):1-18.
    In this paper, we address the question of whether AI should be used for suicide prevention on social media data. We focus on algorithms that can identify persons with suicidal ideation based on their postings on social media platforms and investigate whether private companies like Facebook are justified in using these. To find out if that is the case, we start with providing two examples for AI-based means of suicide prevention in social media. Subsequently, we frame suicide prevention as an (...)
  9. Turning queries into questions: For a plurality of perspectives in the age of AI and other frameworks with limited (mind)sets.Claudia Westermann & Tanu Gupta - 2023 - Technoetic Arts 21 (1):3-13.
    The editorial introduces issue 21.1 of Technoetic Arts via a critical reflection on the artificial intelligence hype (AI hype) that emerged in 2022. Tracing the history of the critique of Large Language Models, the editorial underscores that there are substantial ethical challenges related to bias in the training data, copyright issues, as well as ecological challenges which the technology industry has consistently downplayed over the years. The editorial highlights the distinction between the current AI technology’s reliance on extensive pre-existing (...)
  10. Justice by Algorithm: The Limits of AI in Criminal Sentencing.Isaac Taylor - 2023 - Criminal Justice Ethics 42 (3):193-213.
    Criminal justice systems have traditionally relied heavily on human decision-making, but new technologies are increasingly supplementing the human role in this sector. This paper considers what general limits need to be placed on the use of algorithms in sentencing decisions. It argues that, even once we can build algorithms that equal human decision-making capacities, strict constraints need to be placed on how they are designed and developed. The act of condemnation is a valuable element of criminal sentencing, and using algorithms (...)
  11. Playing Games with Ais: The Limits of GPT-3 and Similar Large Language Models.Adam Sobieszek & Tadeusz Price - 2022 - Minds and Machines 32 (2):341-364.
    This article contributes to the debate around the abilities of large language models such as GPT-3, dealing with: firstly, evaluating how well GPT does in the Turing Test, secondly the limits of such models, especially their tendency to generate falsehoods, and thirdly the social consequences of the problems these models have with truth-telling. We start by formalising the recently proposed notion of reversible questions, which Floridi & Chiriatti propose allow one to ‘identify the nature of the source of their answers’, (...)
  12. Dutch Comfort: The Limits of AI Governance through Municipal Registers.Corinne Cath & Fieke Jansen - 2022 - Techné Research in Philosophy and Technology 26 (3):395-412.
    In this commentary, we respond to the editorial letter by Professor Luciano Floridi entitled “AI as a public service: Learning from Amsterdam and Helsinki.” Here, Floridi considers the positive impact of municipal AI registers, which collect a limited number of algorithmic systems used by the cities of Amsterdam and Helsinki. We question a number of assumptions about AI registers as a governance model for automated systems. We start with recent attempts to normalize AI by decontextualizing and depoliticizing it, which is (...)
  13. The possibilities and limits of AI in Chinese judicial judgment.Zichun Xu, Yang Zhao & Zhongwen Deng - 2022 - AI and Society 37 (4):1601-1611.
    Artificial intelligence (AI) technology has brought new opportunities and challenges to the judicial field: it dramatically improves judicial efficiency and may even change the way the judiciary operates. The concept of judicial justice in the information age has a natural affinity with artificial intelligence. As artificial intelligence continues to make breakthroughs in judicial data sorting and deep learning knowledge, judicial artificial intelligence has gradually become a reality. Artificial intelligence can conduct legal argumentation, interpret calculation results, support human–computer collaboration, and assist judicial judgment. At the (...)
  14. Explainable AI and Causal Understanding: Counterfactual Approaches Considered.Sam Baron - 2023 - Minds and Machines 33 (2):347-377.
    The counterfactual approach to explainable AI (XAI) seeks to provide understanding of AI systems through the provision of counterfactual explanations. In a recent systematic review, Chou et al. (Inform Fus 81:59–83, 2022) argue that the counterfactual approach does not clearly provide causal understanding. They diagnose the problem in terms of the underlying framework within which the counterfactual approach has been developed. To date, the counterfactual approach has not been developed in concert with the approach for specifying causes developed by Pearl (...)
  15. The AI gambit — leveraging artificial intelligence to combat climate change: opportunities, challenges, and recommendations.Josh Cowls, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - 2021 - In Josh Cowls, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi (eds.), Vodafone Institute for Society and Communications.
    In this article we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combating the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the (...)
  16. Explainable AI in the military domain.Nathan Gabriel Wood - 2024 - Ethics and Information Technology 26 (2):1-13.
    Artificial intelligence (AI) has become nearly ubiquitous in modern society, from components of mobile applications to medical support systems, and everything in between. In societally impactful systems imbued with AI, there has been increasing concern related to opaque AI, that is, artificial intelligence where it is unclear how or why certain decisions are reached. This has led to a recent boom in research on “explainable AI” (XAI), or approaches to making AI more explainable and understandable to human users. In the (...)
  17. Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
  18. AI ethics should not remain toothless! A call to bring back the teeth of ethics.Rowena Rodrigues & Anaïs Rességuier - 2020 - Big Data and Society 7 (2).
    Ethics has powerful teeth, but these are barely being used in the ethics of AI today – it is no wonder the ethics of AI is then blamed for having no teeth. This article argues that ‘ethics’ in the current AI ethics field is largely ineffective, trapped in an ‘ethical principles’ approach and as such particularly prone to manipulation, especially by industry actors. Using ethics as a substitute for law risks its abuse and misuse. This significantly limits what ethics can (...)
  19. AI, alignment, and the categorical imperative.Fritz McDonald - 2023 - AI and Ethics 3:337-344.
    Tae Wan Kim, John Hooker, and Thomas Donaldson make an attempt, in recent articles, to solve the alignment problem. As they define the alignment problem, it is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if such machines obey principles of universalization and autonomy, as well as a deontic utilitarian principle. (...)
  20. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this end, we propose to integrate the AIA with a framework developed by (...)
  21. AI Literacy: A Primary Good.P. Benton - 2023 - Springer Nature 1976:31–43.
    In this paper, I argue that AI literacy should be added to the list of primary goods developed by political philosopher John Rawls. Primary goods are the necessary resources all citizens need to exercise their two moral powers, namely their sense of justice and their sense of the good. These goods are advantageous for citizens since without them citizens will not be able to fully develop their moral powers. I claim the lack of AI literacy impacts citizens’ ability to exercise (...)
  22. The AI gambit: leveraging artificial intelligence to combat climate change—opportunities, challenges, and recommendations.Josh Cowls, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - 2021 - AI and Society:1-25.
    In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and (...)
  23. Central limit theorem for the functional of jump Markov process.Nguyen Van Huu, Quan-Hoang Vuong & Tran Minh Ngoc - 2005 - In Nguyen Van Huu, Quan-Hoang Vuong & Tran Minh Ngoc (eds.), Báo cáo: Hội nghị toàn quốc lần thứ III “Xác suất - Thống kê: Nghiên cứu, ứng dụng và giảng dạy”. Ha Noi: Viện Toán học. pp. 34.
    Central limit theorem for the functional of jump Markov process. Nguyễn Văn Hữu, Vương Quân Hoàng, and Trần Minh Ngọc. Report at the Third National Conference “Probability and Statistics: Research, Applications, and Teaching” (p. 34), Ba Vì, Hà Tây, 12–14 May 2005. Institute of Mathematics / University of Science / Vietnam National University, Hanoi.
  24. Is Explainable AI Responsible AI?Isaac Taylor - forthcoming - AI and Society.
    When artificial intelligence (AI) is used to make high-stakes decisions, some worry that this will create a morally troubling responsibility gap—that is, a situation in which nobody is morally responsible for the actions and outcomes that result. Since the responsibility gap might be thought to result from individuals lacking knowledge of the future behavior of AI systems, it can be and has been suggested that deploying explainable artificial intelligence (XAI) techniques will help us to avoid it. These techniques provide humans (...)
  25. The AI gambit: leveraging artificial intelligence to combat climate change—opportunities, challenges, and recommendations.Josh Cowls, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (1):283-307.
    In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and (...)
  26. Operationalising AI ethics: barriers, enablers and next steps.Jessica Morley, Libby Kinsey, Anat Elhalal, Francesca Garcia, Marta Ziosi & Luciano Floridi - 2023 - AI and Society 38 (1):411-423.
    By mid-2019 there were more than 80 AI ethics guides available in the public domain. Despite this, 2020 saw numerous news stories break related to ethically questionable uses of AI. In part, this is because AI ethics theory remains highly abstract, and of limited practical applicability to those actually responsible for designing algorithms and AI systems. Our previous research sought to start closing this gap between the ‘what’ and the ‘how’ of AI ethics through the creation of a searchable typology (...)
  27. The Frame Problem, Gödelian Incompleteness, and the Lucas-Penrose Argument: A Structural Analysis of Arguments About Limits of AI, and Its Physical and Metaphysical Consequences.Yoshihiro Maruyama - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer.
    The frame problem is a fundamental challenge in AI, and the Lucas-Penrose argument is supposed to show a limitation of AI if it is successful at all. Here we discuss both of them from a unified Gödelian point of view. We give an informational reformulation of the frame problem, which turns out to be tightly intertwined with the nature of Gödelian incompleteness in the sense that they both hinge upon the finitarity condition of agents or systems, without which their alleged (...)
  28. AI in medicine: recommendations for social and humanitarian expertise.Е. В Брызгалина, А. Н Гумарова & Е. М Шкомова - 2023 - Siberian Journal of Philosophy 21 (1):51-63.
    The article presents specific recommendations, developed by the authors, for the examination of AI systems in medicine. The recommendations are based on the problems, risks, and limitations of the use of AI identified in scientific and philosophical publications from 2019 to 2022. It is proposed to carry out ethical expertise of medical AI projects, by analogy with the review of experimental research projects in biomedicine, and to conduct an ethical review of AI systems at the stage of preparation for their development (...)
  29. AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony.Ori Freiman - forthcoming - Social Epistemology.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
  30. Emotional AI, soft biometrics and the surveillance of emotional life: An unusual consensus on privacy.Andrew McStay - 2020 - Big Data and Society 7 (1).
    By the early 2020s, emotional artificial intelligence will become increasingly present in everyday objects and practices such as assistants, cars, games, mobile phones, wearables, toys, marketing, insurance, policing, education and border controls. There is also keen interest in using these technologies to regulate and optimize the emotional experiences of spaces, such as workplaces, hospitals, prisons, classrooms, travel infrastructures, restaurants, retail and chain stores. Developers frequently claim that their applications do not identify people. Taking the claim at face value, this paper (...)
  31. Conservative AI and social inequality: conceptualizing alternatives to bias through social theory.Mike Zajko - 2021 - AI and Society 36 (3):1047-1056.
    In response to calls for greater interdisciplinary involvement from the social sciences and humanities in the development, governance, and study of artificial intelligence systems, this paper presents one sociologist’s view on the problem of algorithmic bias and the reproduction of societal bias. Discussions of bias in AI cover much of the same conceptual terrain that sociologists studying inequality have long understood using more specific terms and theories. Concerns over reproducing societal bias should be informed by an understanding of the ways (...)
  32. Operationalising AI ethics: how are companies bridging the gap between practice and principles? An exploratory study.Javier Camacho Ibáñez & Mónica Villas Olmeda - 2022 - AI and Society 37 (4):1663-1687.
    Despite the increase in the research field of ethics in artificial intelligence, most efforts have focused on the debate about principles and guidelines for responsible AI, but not enough attention has been given to the “how” of applied ethics. This paper aims to advance the research exploring the gap between practice and principles in AI ethics by identifying how companies are applying those guidelines and principles in practice. Through a qualitative methodology based on 22 semi-structured interviews and two focus groups, (...)
  33. From AI to cybernetics.Keizo Sato - 1991 - AI and Society 5 (2):155-161.
    Well-known critics of AI such as Hubert Dreyfus and Michael Polanyi tend to confuse cybernetics with AI. Such a confusion is quite misleading and should not be overlooked. In the first place, cybernetics is not vulnerable to criticism of AI as cognitivistic and behaviouristic. In the second place, AI researchers are recommended to consider the cybernetics approach as a way of overcoming the limitations of cognitivism and behaviourism.
  34. AI ethics: from principles to practice.Jianlong Zhou & Fang Chen - 2023 - AI and Society 38 (6):2693-2703.
    Much of the current work on AI ethics has lost its connection to the real-world impact by making AI ethics operable. There exist significant limitations of hyper-focusing on the identification of abstract ethical principles, lacking effective collaboration among stakeholders, and lacking the communication of ethical principles to real-world applications. This position paper presents challenges in making AI ethics operable and highlights key obstacles to AI ethics impact. A preliminary practice example is provided to initiate practical implementations of AI ethics. (...)
  35. AI and social theory.Jakob Mökander & Ralph Schroeder - 2022 - AI and Society 37 (4):1337-1351.
    In this paper, we sketch a programme for AI-driven social theory. We begin by defining what we mean by artificial intelligence (AI) in this context. We then lay out our specification for how AI-based models can draw on the growing availability of digital data to help test the validity of different social theories based on their predictive power. In doing so, we use the work of Randall Collins and his state breakdown model to exemplify that, already today, AI-based models can (...)
  36. AI knows best? Avoiding the traps of paternalism and other pitfalls of AI-based patient preference prediction.Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):185-186.
    In our recent article ‘The Ethics of the Algorithmic Prediction of Goal of Care Preferences: From Theory to Practice’1, we aimed to ignite a critical discussion on why and how to design artificial intelligence (AI) systems assisting clinicians and next-of-kin by predicting goal of care preferences for incapacitated patients. Here, we would like to thank the commentators for their valuable responses to our work. We identified three core themes in their commentaries: (1) the risks of AI paternalism, (2) worries about (...)
  37. Generative AI and human–robot interaction: implications and future agenda for business, society and ethics.Bojan Obrenovic, Xiao Gu, Guoyu Wang, Danijela Godinic & Ilimdorjon Jakhongirov - forthcoming - AI and Society:1-14.
    The revolution of artificial intelligence (AI), particularly generative AI, and its implications for human–robot interaction (HRI) opened up the debate on crucial regulatory, business, societal, and ethical considerations. This paper explores essential issues from the anthropomorphic perspective, examining the complex interplay between humans and AI models in societal and corporate contexts. We provided a comprehensive review of existing literature on HRI, with a special emphasis on the impact of generative models such as ChatGPT. The scientometric study posits that due to (...)
  38. Can AI Weapons Make Ethical Decisions?Ross W. Bellaby - 2021 - Criminal Justice Ethics 40 (2):86-107.
    The ability of machines to make truly independent and autonomous decisions is a goal of many, not least of military leaders who wish to take the human out of the loop as much as possible, claiming that autonomous military weaponry—most notably drones—can make decisions more quickly and with greater accuracy. However, there is no clear understanding of how autonomous weapons should be conceptualized and of the implications that their “autonomous” nature has on them as ethical agents. It will be argued (...)
  39. Emergent Models for Moral AI Spirituality.Mark Graves - 2021 - International Journal of Interactive Multimedia and Artificial Intelligence 7 (1):7-15.
    Examining AI spirituality can illuminate problematic assumptions about human spirituality and AI cognition, suggest possible directions for AI development, reduce uncertainty about future AI, and yield a methodological lens sufficient to investigate human-AI sociotechnical interaction and morality. Incompatible philosophical assumptions about human spirituality and AI limit investigations of both and suggest a vast gulf between them. An emergentist approach can replace dualist assumptions about human spirituality and identify emergent behavior in AI computation to overcome overly reductionist assumptions about computation. Using (...)
  40. Bias in algorithms of AI systems developed for COVID-19: A scoping review.Janet Delgado, Alicia de Manuel, Iris Parra, Cristian Moyano, Jon Rueda, Ariel Guersenzvaig, Txetxu Ausin, Maite Cruz, David Casacuberta & Angel Puyol - 2022 - Journal of Bioethical Inquiry 19 (3):407-419.
    To analyze which ethically relevant biases have been identified by academic literature in artificial intelligence algorithms developed either for patient risk prediction and triage, or for contact tracing to deal with the COVID-19 pandemic. Additionally, to specifically investigate whether the role of social determinants of health has been considered in these AI developments or not. We conducted a scoping review of the literature, which covered publications from March 2020 to April 2021. Studies mentioning biases in AI algorithms developed for contact (...)
  41. How to design AI for social good: seven essential factors.Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771–1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
  42. Adopting AI: how familiarity breeds both trust and contempt.Michael C. Horowitz, Lauren Kahn, Julia Macdonald & Jacquelyn Schneider - forthcoming - AI and Society:1-15.
    Despite pronouncements about the inevitable diffusion of artificial intelligence and autonomous technologies, in practice, it is human behavior, not technology in a vacuum, that dictates how technology seeps into—and changes—societies. To better understand how human preferences shape technological adoption and the spread of AI-enabled autonomous technologies, we look at representative adult samples of US public opinion in 2018 and 2020 on the use of four types of autonomous technologies: vehicles, surgery, weapons, and cyber defense. By focusing on these four diverse (...)
  43. Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context.Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for (...)
  44. AI and the Origins of the Functional Programming Language Style.Mark Priestley - 2017 - Minds and Machines 27 (3):449-472.
    The Lisp programming language is often described as the first functional programming language and also as an important early AI language. In the history of functional programming, however, it occupies a rather anomalous position, as the circumstances of its development do not fit well with the widely accepted view that functional languages have been developed through a theoretically-inspired project of deriving practical programming languages from the lambda calculus. This paper examines the origins of Lisp in the early AI programming work (...)
  45. Challenges of responsible AI in practice: scoping review and recommended actions.Malak Sadek, Emma Kallina, Thomas Bohné, Céline Mougenot, Rafael A. Calvo & Stephen Cave - forthcoming - AI and Society:1-17.
    Responsible AI (RAI) guidelines aim to ensure that AI systems respect democratic values. While a step in the right direction, they currently fail to impact practice. Our work discusses reasons for this lack of impact and clusters them into five areas: (1) the abstract nature of RAI guidelines, (2) the problem of selecting and reconciling values, (3) the difficulty of operationalising RAI success metrics, (4) the fragmentation of the AI pipeline, and (5) the lack of internal advocacy and accountability. Afterwards, (...)
  46. Apprehending AI moral purpose in practical wisdom.Mark Graves - 2022 - AI and Society:1-14.
    Practical wisdom enables moral decision-making and action by aligning one’s apprehension of proximate goods with a distal, socially embedded interpretation of a more ultimate Good. A focus on purpose within the overall process mutually informs human moral psychology and moral AI development in their examinations of practical wisdom. AI practical wisdom could ground an AI system’s apprehension of reality in a sociotechnical moral process committed to orienting AI development and action in light of a pluralistic, diverse interpretation of that Good. (...)
  47. AI in human teams: effects on technology use, members’ interactions, and creative performance under time scarcity.Sonia Jawaid Shaikh & Ignacio F. Cruz - 2023 - AI and Society 38 (4):1587-1600.
    Time and technology permeate the fabric of teamwork across a variety of settings to affect outcomes which have a wide range of consequences. However, there is a limited understanding about the interplay between these factors for teams, especially as applied to artificial intelligence (AI) technology. With the increasing integration of AI into human teams, we need to understand how environmental factors such as time scarcity interact with AI technology to affect team behaviors. To address this gap in the literature, we (...)
  48. 15 challenges for AI: or what AI (currently) can’t do.Thilo Hagendorff & Katharina Wezel - 2020 - AI and Society 35 (2):355-365.
    The current “AI Summer” is marked by scientific breakthroughs and economic successes in the fields of research, development, and application of systems with artificial intelligence. But, aside from the great hopes and promises associated with artificial intelligence, there are a number of challenges, shortcomings and even limitations of the technology. For one, these challenges arise from methodological and epistemological misconceptions about the capabilities of artificial intelligence. Secondly, they result from restrictions of the social context in which the development of (...)
  49. Putting explainable AI in context: institutional explanations for medical AI.Jacob Browning & Mark Theunissen - 2022 - Ethics and Information Technology 24 (2).
    There is a current debate about if, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems is sufficient for providing epistemic justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations—and (...)
  50. Engineering Equity: How AI Can Help Reduce the Harm of Implicit Bias.Ying-Tung Lin, Tzu-Wei Hung & Linus Ta-Lun Huang - 2020 - Philosophy and Technology 34 (S1):65-90.
    This paper focuses on the potential of “equitech”—AI technology that improves equity. Recently, interventions have been developed to reduce the harm of implicit bias, the automatic form of stereotype or prejudice that contributes to injustice. However, these interventions—some of which are assisted by AI-related technology—have significant limitations, including unintended negative consequences and general inefficacy. To overcome these limitations, we propose a two-dimensional framework to assess current AI-assisted interventions and explore promising new ones. We begin by using the case (...)
1 — 50 / 997