Robots Research Paper


Abstract

The expansion of human capability through emerging technologies resulting from converging developments in advanced robotics, information technology, and the cognitive sciences has long served as a point of fascination for literature and the box office alike, but it is also transforming the landscape of contemporary medicine, science, and technology. Developments in robotics over the past few decades in manufacturing, shipping, and military applications have already begun to profoundly reshape these industries. Similarly, healthcare, the service industries, and the transportation sector are poised to join this robotic revolution through the increasing sophistication of surgical robots and other robotic caregivers. Given the vast potential of this form of emerging technology, this entry explores the field of robotic technology across a wide range of industries and the subsequent discourse of robot ethics to consider whether there are inherent concerns or dangers in the utilization of these emerging technologies. Additional attention is given to the ethical considerations and social implications of these technologies, as well as to legal and other policy questions raised with respect to regulating robotics for the public good.

Introduction

Humans have long been fascinated with artificial life, mechanical devices, and automatons. The automata of Greek mythology (especially those created by Hephaistos in Homer's Iliad), the golems of early Jewish literature, Leonardo da Vinci's mechanical knight (circa 1495), and Mary Shelley's Frankenstein (early 1800s) stoked the fires of the imagination through a Promethean achievement: homo faber as creator of artificial life (Lin et al. 2012). In the modern era, the prominence of robots at the World's Fairs of the 1930s in Chicago and New York (i.e., the Fountain of Science in 1933 and Elektro in 1939) heralded the coming of age of humanity's industrial and technological capabilities, standing at the cusp of a technological parallel to Manifest Destiny. In the contemporary context, the emergence of robots such as Honda's ASIMO (introduced in 2000) and more recently Aldebaran Robotics' NAO (launched in 2004), along with rising interest in cultural phenomena such as the RoboCup and the popularity of films such as Blade Runner; A.I.; I, Robot; and the Terminator franchise, has demonstrated a perennial curiosity about the prospect of humans creating artificial life, and particularly humanoid robots.

While the concept of robots is not of recent origin, technical developments over the past few decades in advanced robotics, information technology, and machine learning have positioned robots among several emerging technologies with rising significance for present and near-term ethical consideration. The purpose of this entry is to explore the historical background of robotics; clarify key terminology; examine the landscape of contemporary research, development, and application of robotic technology; and conclude with a discussion of various social, legal, and ethical considerations raised by robotics.

Historical Development And The Sci-Fi Genre

Background

Perhaps unsurprising given the long-standing speculative fascination with the creation of automata and artificial life, the terms “robot” and “robotics” emerge not from among the scientific or other technical disciplines but rather in literary contexts. “Robot” is generally first credited to Karel Čapek’s Czech play R.U.R. (initially published in 1920), which stood for Rossumovi Univerzální Roboti (or Rossum’s Universal Robots as the English subtitle is translated). Such robots were created with the purpose of functioning as a sort of servant class of workers. Meanwhile, the term “robotics,” while clearly derived from “robot,” is ascribed to Isaac Asimov in several of his 1940s short stories. These short stories also famously established the Three Laws of Robotics that serve as a benchmark for mainstream cultural awareness of robotic ethics. The meaning and use of “robot” and “robotics” are occasionally distinguished, with “robotics” including a somewhat broader range of technologies; however, they are typically used synonymously.

Historical reflection on robots frequently connects the contemporary interest in robots with the long-standing interest in the creation of artificial life. Indeed, the robots of Čapek's R.U.R. are closer to synthetic humans (aka synths) or the replicants of Blade Runner, organic creatures that can pass as human, than to clearly mechanical droids such as C-3PO in Star Wars. As such, the robots of R.U.R. stand in a literary tradition that also connects with Shelley's Frankenstein. Here autonomous robots are more than mere curiosities of the imagination; they serve as a prominent archetype in contemporary techno-culture through which to play out the literary hopes and fears of humanity's uneasy interaction with the rapid pace of scientific advance and technological innovation (Geraci 2010). Robotics opens the imagination to the human creation of new beings not unlike human beings but composed of machine rather than flesh (Capurro and Nagenborg 2009). Furthermore, the incorporation of robotic technologies into the human body through such concepts as cybernetic organisms, also known as cyborgs (e.g., Bicentennial Man), along with androids that are indiscernible from their human counterparts (such as in Blade Runner), has been used as a powerful expression in contemporary fiction and film to explore the nature of personhood and identity, what it means to be human, and the limits of being human in a technological age (Schneider 2009). Robots have also been utilized in literary and cinematic contexts to explore the politics of power (e.g., Chappie), along with gender and sexuality (e.g., Ex Machina).

More than merely sci-fi literary conventions, however, robots offer a direct connection between literature and ethics to explore possible futures at the nexus of science, technology, and human values (Geraci 2010). Such "speculative ethics" may envision either dystopian or utopian futures and serve as useful talking points for technology assessment to explore possible ethical and social implications of emerging technologies. Nordmann (2007) and others question the validity of such approaches by arguing that speculative ethics treats imagined futures as if they already exist and thereby displaces actual presenting issues. As with all rapidly evolving arenas of emerging technologies with profoundly disruptive potential to reshape industries and social practices, balancing ethical reflection on existing robotic technologies with anticipation of future applications is a delicate task. Merely setting aside the speculative dimension of potential applications in robotics, however, would ignore the long-term analysis necessary to implement appropriate policy and regulatory regimes in the conceptual and policy vacuums that surround the unknown risks of emerging technologies. Indeed, such scale of impact has led several tech innovators, such as Elon Musk and Bill Gates, to suggest that artificial intelligence (AI), and thus sentient robots, present an area of existential risk that demands speculative analysis. Given the rapid pace of research and development in technology arenas and the time necessary to adequately explore emerging areas of ethical inquiry, merely reflecting on existing technologies would result in a reactive model of ethics that never shapes technology development. Technology assessment and ethical analysis of emerging technologies are in some respects necessarily speculative endeavors. However, an appropriate caution must be raised to prevent the conflation of hypotheticals with presenting technologies in such ethical discourse.

Defining Terms

Many experts note the challenges in offering a precise definition of "robot" (Krishnan 2009; Rizza 2013; Lin et al. 2012). Krishnan (2009) suggests this is a result of the complex interweaving of the much older concept of automata and the notion of robots that emerged with Čapek's R.U.R., as these reflect differing interests in the broader category of artificial life. Much of the contemporary understanding of "robot" has moved away from organically based autonomous agents (sometimes referred to as "soft robots" or "wetware") to a more mechanical conception (or hardware model).

In this more contemporary sense, “robot,” according to Krishnan (2009), is “defined as a machine, which is able to sense its environment, which is programmed and which is able to manipulate or interact with its environment” and “therefore reproduces the general human abilities of perceiving, thinking, and acting.” This definition distinguishes robots from a “simple remote-controlled device,” requiring that they “must exhibit some degree of autonomy, even if it is only very limited autonomy.” Robots may come in any size or shape (not requiring a set form or semblance to any living organism), and as technological capability continues to evolve and converge with nanotechnology and biotechnology, the meaning of the term robot could become even more diverse (Krishnan 2009). Indeed, while much of robotic design has been patterned after human likeness or the likeness of other organisms, functional design philosophy has led increasingly to a wide spectrum of robot forms.

As Krishnan identifies, the concept of autonomy is a key aspect of contemporary robotics and incorporates the strong emphasis on machine learning or AI research and development. Rizza (2013; cf. Matarić 2007) similarly argues for an understanding of autonomy (even if defined on a continuum), specifying that “a robot is a programmable machine incorporating any degree of artificial intelligence allowing for some degree of autonomy and an ability to sense, perceive, and act in or on its environment.” While robots may express a wide range of capabilities in these regards, the three qualities of autonomy, perception, and responsiveness are often utilized as broad criteria for identification. To clarify the spectrum of autonomy that may be manifested by robots, Rizza (2013) distinguishes two distinct types: (1) supervised autonomy that includes “nonautonomous systems with significant AI algorithms aiding human decision making” and (2) learning autonomy, which he defines as a “machine having been programmed to learn from and respond to its environment, and [that] operates without further human intervention.”
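
To make these definitional criteria concrete, the following minimal Python sketch illustrates the sense-perceive-act loop and the distinction between supervised autonomy and learning autonomy described above. The class and method names are illustrative assumptions introduced here for exposition; they are not drawn from Krishnan, Rizza, or any deployed robotic system.

# Minimal sketch (hypothetical names) of the sense-perceive-act loop and of the
# supervised vs. learning autonomy distinction discussed above (cf. Rizza 2013).

from abc import ABC, abstractmethod


class Robot(ABC):
    """A machine that senses, perceives (interprets), and acts on its environment."""

    def sense(self, environment):
        # Return raw readings from stand-in sensors (here, values in a dict).
        return environment.get("sensor_readings", [])

    def perceive(self, readings):
        # Interpret raw readings into a simple world model (e.g., obstacle present?).
        return {"obstacle_detected": any(r < 0.5 for r in readings)}

    @abstractmethod
    def decide(self, world_model):
        ...

    def act(self, decision):
        print(f"Executing action: {decision}")

    def step(self, environment):
        # One pass through the sense-perceive-decide-act loop.
        readings = self.sense(environment)
        world_model = self.perceive(readings)
        self.act(self.decide(world_model))


class SupervisedAutonomyRobot(Robot):
    """AI aids decision making, but a human operator confirms the final action."""

    def __init__(self, operator_approves):
        self.operator_approves = operator_approves  # callback standing in for a human

    def decide(self, world_model):
        suggestion = "stop" if world_model["obstacle_detected"] else "advance"
        return suggestion if self.operator_approves(suggestion) else "hold"


class LearningAutonomyRobot(Robot):
    """Acts on a previously learned policy without further human intervention."""

    def __init__(self, learned_policy):
        self.learned_policy = learned_policy  # e.g., a lookup table learned offline

    def decide(self, world_model):
        return self.learned_policy.get(world_model["obstacle_detected"], "hold")


if __name__ == "__main__":
    env = {"sensor_readings": [0.9, 0.4, 0.8]}
    SupervisedAutonomyRobot(operator_approves=lambda a: a == "stop").step(env)
    LearningAutonomyRobot(learned_policy={True: "stop", False: "advance"}).step(env)

In this sketch, the supervised-autonomy robot defers its suggested action to a human operator's approval, while the learning-autonomy robot acts directly on a policy it has already learned, reflecting the continuum of autonomy noted above.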

Current And Potential Applications

Given that robots are so wide-ranging in their applications, what follows is merely a sampling of the various global developments in recent years from some of the more significant industries and social sectors. Such developments emerged initially in the USA in the 1970s but quickly moved to include Japan and Europe and later South Korea, Australia, and a much broader international community of research and development (Lin et al. 2012).

Manufacturing

For much of the late twentieth and early twenty-first centuries, the vast majority of robot applications were industrial. From the rise of warehouse distribution machines (e.g., online retailer Amazon's Kiva Systems robots) to increasingly sophisticated industrial manufacturing robots that utilize "train-by-demonstration" technology (e.g., Rethink Robotics' Baxter and Sawyer industrial robots), robots have revolutionized mass-production models of manufacturing across a wide variety of industries such as automotive manufacturing, food production, textiles, electronics manufacturing, and fabrication.

Large-scale adoption of manufacturing robots has fueled a long-standing concern that robotic technologies replace jobs (Lin et al. 2012; Gerdes 2014). In this respect, robotic automation could be understood in some sense as the logical outcome of mass production and the assembly-line model of manufacturing. Others respond that robotic automation improves the overall quality of the workplace environment by removing dangerous, dehumanizing, and/or repetitive activities from the manufacturing process and thus allows for more humane labor conditions. Furthermore, these proponents claim that the deployment of robots requires engineers and other highly skilled technicians to maintain them. Critics respond that such automation erodes important skills of craftsmanship and increases technological dependence. Regardless of the net effect on individual jobs, the incorporation of robotic automation offers the prospect of increased efficiency for certain types of labor. Workforce implications of robotic automation across all manufacturing industries will become even more pervasive as industrial robots such as Baxter and Sawyer, with decreasing retail costs, are increasingly incorporated into the existing workforce.

Military Robotics

Beyond industrial manufacturing, military robotics is one of the most prominent applications of contemporary robots. Many point to the November 2001 use of a Predator UAV (unmanned aerial vehicle, more commonly referred to as a drone) by the CIA to attack purported terrorists in Yemen as the first use of a military robot as an offensive weapon (Krishnan 2009; Rizza 2013; Capurro and Nagenborg 2009). As of 2009, at least 90 nations were believed to have UAVs in their military arsenals, along with a growing number of nations with less sophisticated robotic weapons, such as cruise missiles (Krishnan 2009). Land-based robots are also at various stages of research, development, and military deployment and include such robots as Big Dog, Crusher, Harpy, and Dragon Runner, which assist in supply logistics, offensive and defensive weaponry options, and other activities; the BEAR military robot, for example, is proposed for search and extraction activities. As Lin et al. (2012) note, military robots "perform a range of duties, such as spying or surveillance (air, land, underwater, space), defusing bombs, assisting the wounded, inspecting hideouts, and attacking targets."

In less than a decade, the number of US military robots deployed in war zones grew from virtually zero to over 11,000 by 2009 (Krishnan 2009). With economic incentives such as lower operating costs than manned vehicles and the possibility of more politically palatable military engagements through decreased human troop casualties, increased reliance upon robotic forces and the future deployment of as yet nonexistent autonomous weapons or "killer robots" seem likely (Krishnan 2009). Indeed, already anticipating these developments, the US National Defense Authorization Act (NDAA) of 2000 called for one-third of operational deep-strike force aircraft to be unmanned by 2010 and one-third of operational ground combat vehicles to be unmanned by 2015 (Rizza 2013). Subsequent military planning led to the US Department of Defense release of the Joint Robotics Program Master Plan FY 2005 and the subsequent FY2009–2034 Unmanned Systems Integrated Roadmap.

Military robots, though, are not merely reserved for offensive capabilities but also include a variety of other applications. Several automated air defense systems have been developed, such as the Phalanx, Aegis, C-RAM, and Patriot missile systems deployed by the US military (Krishnan 2009; Rizza 2013), as well as the Israeli Iron Dome. Additionally, militaries utilize and continue to develop various support and detection robots (e.g., iRobot's PackBot for bomb disposal and reconnaissance missions). Furthermore, the US Defense Advanced Research Projects Agency (DARPA n.d.; Krishnan 2009) has invested in multiple research prize competitions, including the Grand Challenge (2004, 2005) and the subsequent Urban Challenge (2007), which promoted autonomous land-based vehicles, and the Robotics Challenge (2012–2015), which promoted robot involvement in search and rescue operations. Similar military competitions have been offered in Germany (European Land-Robot Trial), the UK (Grand Challenge), and Singapore (TechX Challenge) (Krishnan 2009).

As military robots continue to advance in sophistication and capability, significant attention has been given to the distinction between two basic types of military robots: those that are remotely controlled (tele-operated) and those that are self-directed or autonomous (Krishnan 2009). Human operators are in full or partial control of tele-operated machines, which would follow traditional paradigms for rules of engagement. Fully autonomous systems, meanwhile, remain speculative possibilities but raise significant ethical questions about the nature of warfare utilizing autonomous military robots. Rizza (2013) argues that the trend, particularly within military robotics, will be toward "greater autonomy, learning, and adaptive systems able to collaborate with other machines and humans," with "greater survivability and resiliency." Such systems will increasingly transition from requiring multiple human operators for a single unmanned system to a single operator observing an "autonomous, collaborative swarm of many vehicles."

Transportation

As of 2015, the fully autonomous vehicle, whether driverless car or other autonomous platform, continues to inch toward commercial realization. Autopilots on airplanes offer increasing degrees of autonomous air transportation during flights, while human pilots handle takeoffs, landings, and flight conditions that deviate from normal operating protocols. Google's foray into driverless cars may have popularized mainstream interest in autonomous transportation, but heavy investment by the automotive industry has led to a variety of advances in the increasing autonomy of vehicles in the consumer market, from parking-assist features to automatic braking. Despite these advances, most projections for the mainstream adoption of driverless automobiles remain decades in the future. Proponents of such vehicles point to increased fuel efficiency from caravan-style transportation along with decreased vehicular accidents due to increased sensory capacity and the removal of human error. Critics raise concerns over inadequate models for liability should accidents occur and for securing the integrity of systems from hacking, along with privacy considerations related to the increased use of sensors and monitoring to aid the vehicles. The possibility of autonomous trucking platforms could revolutionize delivery logistics but would also substantially impact employment paradigms and transportation models.

Medical And Healthcare

Medical robots constitute a small subset of robots with a wide range of applications. Surgical robots, for instance, include several applications of robotic systems used in surgical interventions. Robotic surgery has been around since the 1994 US FDA approval of the AESOP system, which enabled a surgeon to control an endoscope by voice command (Capurro and Nagenborg 2009). Beyond external or assistive support, another class of robotic systems performs semiautonomous surgical interventions, such as prostate removal with the PROBOT or some forms of orthopedic surgery, both of which involve a cooperative analysis between a robotic system and a surgeon. In such a system, the surgeon is involved in the planning and monitors the intervention, which is executed by the robot. Other surgical robots, such as the Zeus and the da Vinci® systems, are tele-operated by a surgeon (Capurro and Nagenborg 2009; Lin et al. 2012) for both surgical interventions and diagnostic purposes.

Beyond these applications, research is exploring the development of "internal diagnostic robots endowed with locomotion abilities" for procedures examining the colon and other sections of the intestine (Capurro and Nagenborg 2009). One futuristic application of such robots involves the development of the proposed ARES (assembling reconfigurable endoluminal surgical) ingestible system designed to work in the gastrointestinal tract (Tibbals 2011; Lin et al. 2012). Other futuristic applications of this technology envision the development of nanoscale medical diagnostics and even nanosurgical interventions. Such nanobots reflect the kind of molecular machines envisioned as the speculative future of regenerative medicine, where medical monitoring and surgical intervention happen at the molecular and cellular level on an ongoing basis.

Another application of medical robots includes a range of rehabilitation systems such as the InMotion ARM™, which can assist with upper limb rehabilitation, or the Lokomat® system designed for lower limb rehabilitation. Cognitive, emotional, and psychological rehabilitation has also begun to incorporate different types of robots: some, "such as the PARO, which looks like a baby seal, are designed for therapeutic purposes, such as reducing stress, stimulating cognitive activity, and improving socialization" (Lin et al. 2012; Capurro and Nagenborg 2009). Other uses include robots to assist in the socialization of children with severe autism spectrum disorders.

Additional developments in medical robotics involve various advanced prosthetics and the use of either noninvasive or invasive brain-computer interfaces (BCI) or interfaces with the peripheral nervous system (Capurro and Nagenborg 2009). Military and private sector investment in the next generation of robotic prosthetics has grown significantly in the effort to assist veterans who have been paralyzed or lost limbs during military interventions since the turn of the century. Such devices, while therapeutic in nature, also raise the prospect of elective augmentation and, more speculatively, of cybernetic organisms.

A further segment of medical robots includes those developed to perform activities similar to those of nursing assistants and pharmacy technicians. RIKEN's RIBA assists in the care of patients, with the ability to lift a patient out of bed and transfer them to and from a wheelchair. Other robotic systems, such as ROBOT-Rx® from Aesynt and PillPick® from Swisslog, automate the organization, inventory, and dispensing of prescription medications for pharmacies, particularly within hospital settings.

Finally, an emerging area of medical robots extends the tele-operation model of certain surgical robots into the area of telepresence, allowing doctors to make bedside visits and perform consultations remotely. The emergence of such options for the growing practice of telemedicine offers significant opportunities and challenges for traditional conceptions and practices of medicine, particularly with respect to the historic emphasis on therapeutic touch and an orientation toward care, as opposed to the more contemporary emphasis on diagnostics and technique that emerging models of telemedicine appear to offer.

Labor, Hospitality, And Care Services

While not nearly as mainstream in its adoption as the military and manufacturing sectors, the labor and service industry is also increasingly populated by robotic technologies. From consumer devices like Roomba vacuum cleaners to automated lawn mowers and robots that can iron clothes or move things around the home, one count suggested that, as of 2012, there were some seven million plus service robots in circulation performing various home-based chores (Lin et al. 2012). Japan has been a key innovator in developing service robots for a variety of sectors, as its aging population, declining birthrates, and shrinking workforce present a particular dilemma, with burdens rising especially in the labor, hospitality, and medical services industries. Robotic receptionists such as ChihiraAico, the humanoid communications robot developed by Toshiba, and other emotional or social robots such as Aldebaran's Pepper offer informational capabilities along with emotional or behavioral intelligence for a range of potential applications in the retail services and hospitality industries.

Beyond mere social interaction and limited-function, home-based services, a wide range of service and labor robots are being designed for restaurant food preparation (such as robotic cooks and bartenders) and for personal care of the elderly and children (Lin et al. 2012). Furthermore, a number of entertainment robots, many of which are designed as toys, are finding alternative use in such settings as personal companions. Additional applications of robots designed for personal companionship include sex robots, which raise a host of additional considerations (Turkle 2010; Lin et al. 2012). Like their healthcare counterparts, robots that serve as companions and other forms of assistive robots raise important considerations regarding the nature of human relationships, community, and friendship as such (Brey et al. 2014; Turkle 2010; Lin et al. 2012).

Entertainment

Finally, a wide spectrum of consumer entertainment robots exists that function as dual-purpose "edutainment" or education-entertainment robots, such as ASIMO, NAO, LEGO® MINDSTORMS®, and iCub. As Lin et al. (2012) note, "[t]hough they may lack a clear use, such as serving specific military or manufacturing functions, they aid researchers in the study of cognition (both human and artificial), motion, and other areas related to the advancement of robotics." Beyond these educational tools exists a host of discovery and entertainment devices that fall more broadly into the category of robotic toys, such as Sony's robotic dog AIBO. Crossover edutainment robots such as Aldebaran's NAO have also contributed to rising mainstream interest in the annual RoboCup. Founded in 1997, the RoboCup is an international competition to promote robotics and AI research, with the goal of "developing by 2050 a robotic soccer team capable of winning against the human team champion of the FIFA World Cup" (RoboCup 2014). In addition, as noted in the context of medical robots, increasing capabilities in remote presence are being developed that include robotic telepresence, or what are referred to as robot avatars. Such robotic telepresence may have a variety of applications for the entertainment industry, such as the development of a remote tourism industry or robot-mediated communication of the kind envisioned in William Gibson's novel The Peripheral (2014).

Ethical Dimensions

As Guglielmo Tamburrini notes, "[r]obot ethics is a branch of applied ethics which endeavours to isolate and analyse ethical issues arising in connection with present and prospective uses of robots" (Capurro and Nagenborg 2009). Robot ethics encompasses both the actions of individual robots and the responsibilities of their designers and builders. At a base level, a primary ethical concern is that robots should be prevented from doing "harm to people, to themselves, to property, to the environment, etc." such that they "do not pose serious risks to people in the first place, just like any other mass-produced technology" (Capurro and Nagenborg 2009). However, as robots become increasingly sophisticated and autonomous in their design capabilities, Peter Asaro argues that "it will become necessary to develop more sophisticated safety control systems that prevent the most obvious dangers and potential harms" but that these will also "require greater social, emotional, and moral intelligence" (Capurro and Nagenborg 2009).

In this respect, Isaac Asimov's Three Laws of Robotics stand as an early iteration of this primary aim of prevention. Asimov's Three Laws first appeared in the 1942 short story "Runaround," which was later incorporated into I, Robot (1950) as part of a collection of short stories, and were further developed in his Robot series beginning with The Caves of Steel (1954). According to Asimov (1954), the Three Laws state:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov's Three Laws were later refined in his R. Daneel Olivaw novels to clarify, as implied, that a robot may not knowingly injure or knowingly allow a human being to come to harm. Furthermore, in Robots and Empire, and again later in Foundation and Earth, Asimov added a fourth law, or what is referred to in the novels as the Zeroth Law, to precede the others: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." The Three Laws were hardwired or hardcoded into the positronic brains of all (or almost all) robots in the Asimovian universe, such that their violation resulted in the incapacitation of the positronic brain (cf. plotlines in The Naked Sun and The Robots of Dawn). The Three Laws have had a wide influence on science fiction literature involving robots, such as Jack Williamson's With Folded Hands, and on the extensions of the Foundation and Robot series by later authors. Furthermore, through the box office success of films such as I, Robot (2004) and Bicentennial Man (1999), the Three Laws have come to reflect a tacit awareness of robot ethics in mainstream culture. Despite work in embodied cognition, machine learning, and other cognitive sciences, as well as work in the moral enhancement of humans, the practicalities of hardcoding or hardwiring such laws of prevention remain a work of science fiction. As Asimov anticipates in The Robots of Dawn, merely recognizing what qualifies precisely as human would be difficult for robots.

Agency, Responsibility, And Autonomous Machines

Asaro suggests there are three distinct ways we may consider the ethics of robotics: (1) "humans acting through, or with robots" in which humans "are the ethical agents"; (2) "think[ing] practically about how to design robots to act ethically, or theoretically about whether robots could be truly ethical agents" in which "robots are the ethical subjects"; or (3) how we "construe the ethical relationships between humans and robots, and whether ethical agents might have certain duties towards robots" (Capurro and Nagenborg 2009). Robot ethics, he argues, should be focused primarily on "prevent[ing] robots, and other autonomous technologies, from doing harm, and only secondarily something that resolves the ambiguous moral status of robot agents." In this regard, he clarifies that robots are causal agents, not moral agents, since moral agents "adhere to a system of ethics when they employ that system in choosing which actions they will take and which they will refrain from taking" (Capurro and Nagenborg 2009). While this may be the case, Asaro cautions against setting up a false dichotomy of causal agency between amoral and moral agents, arguing instead for "a continuum from amorality to fully autonomous morality" capable of including degrees of "quasi-moral" agency such as are recognized, for instance, in children (Capurro and Nagenborg 2009). Furthermore, attention needs to be given to what would constitute artificial moral agents (Wallach and Allen 2009; Lin et al. 2012). Given that robots as technological artifacts are products of engineering, they embody the design principles or values of their makers. In this respect, the engineers and others involved in the production of robots must be cognizant of their responsibility for, and therefore the moral weight of, their various design decisions (Capurro and Nagenborg 2009).

Risk, Unintended Consequences, And The Precautionary Principle

While the potential benefits of robots are vast, their potential societal costs have not gone unnoticed. Furthermore, the disruptive nature of robotic technologies and the risk of unintended consequences cannot simply be set aside. On a basic level, primary risk assessment must begin with considerations of safety and error. As Lin et al. (2012) note, "With robotics, the safety issue is with their software and design. Computer scientists, as fallible human beings, understandably struggle to create a perfect piece of complex software... even a tiny software flaw in machinery, such as a car or a robot, could lead to fatal results." Corporations involved in the production of robots could "be held legally responsible for any harm [the robots] do to the public" (Capurro and Nagenborg 2009; Lin et al. 2012). As Asaro notes, however, legal responsibility or liability should not be confused with the additional burden of moral responsibility that such entities should also bear in design and development, as discussed earlier. That said, emphasizing the legal responsibilities of manufacturers allows the possibility of corporate liability exposure to shape safety requirements in engineering design and provides a preliminary framework for assessing agency and responsibility (Capurro and Nagenborg 2009; Lin et al. 2012).

Beyond basic safety considerations and risks of error, there are broader concerns that include the security of robot systems and the potential for hacked or otherwise compromised robotic systems that permit unauthorized override of actions. Such concerns have already been raised for the prospect of driverless vehicles, as the increasing integration of information technologies and driver-assist functions in private vehicles has led to safety concerns that hackers could take over braking or acceleration systems. These security issues "will become more important as robots become networked and more indispensable to everyday life, as computers and smartphones are today" (Lin et al. 2012). Consideration must be given to putting in place proper liability regulations that ensure risks are carefully assessed with respect to both safety and the possibility of error, as well as to models for the ongoing security of systems.

Furthermore, unintended consequences are often examined with respect to short-term effects, but long-term consequences may also result that are not properly anticipated or thoroughly considered. One such consideration that has been raised is the unintended consequence of technological dependence. Some have raised the possibility that automation through advances such as self-driving cars, autopilots in airplanes, GPS, and factory robots is leading to increasing levels of dissatisfaction and even the loss of important skill sets (Carr 2014; Lin et al. 2012).

In the midst of the uncertain risks of emerging technologies such as advanced robotics, the precautionary principle is often invoked. In its common understanding, this principle "demands the proactive introduction of protective measures in the face of possible risks, which science at present (in the absence of knowledge) can neither confirm nor deny" (Cameron and Mitchell 2007). In doing so, care must be taken to employ the precautionary principle in a manner that avoids stifling technological innovation. In the case of disruptive technologies, however, the precautionary principle offers a conceptual framework for advancing prudently in research, development, and commercialization until real risks can be distinguished from phantom risks and appropriately managed.

Role Of Technology Assessment

Beyond the narrower areas of risk assessment and analyses of effectiveness and economic impact, the assessment of emerging technologies such as robotics should also account for the broader context of technology assessment. Accordingly, Henk ten Have (2007) has noted that technology assessment should include a "broader conception" that takes into consideration "the social and ethical consequences of technologies." This broader conception may include an examination of "the value judgments at play in recommendations and determine if and how those recommendations were not simply scientific but also normative" (ten Have 2007).

Such considerations must go beyond the technology itself in its technical dimensions to examine the values that are underlying or inherent within the technologies and to assess whether the technologies are "justified in the light of moral values" (ten Have 2007). This is particularly important in advanced technology arenas such as robotics, which derives from, among other things, advances in information and communication technologies. Such broader approaches to technology assessment include what is often referred to as science, technology, and society (STS) studies and examine such considerations as technological ontology and thus the value-ladenness of technologies, as well as the individual and social implications for humanity (Verbeek 2011; Turkle 2010). These broader concerns of technology assessment seek to examine the possibility of designing and shaping technology to promote values such as community and well-being and what might broadly be conceived of as the good life in an advanced technological age (Brey et al. 2014).

Robots And The Global Context

Military Robots, Ethics, And International Law

Current domestic and international law seems ambivalent toward military robots. A key issue that is often raised concerns impunity in warfare due to technological asymmetry. As Rizza (2013; cf. Capurro and Nagenborg 2009; Lin et al. 2012) notes, "On the spectrum of impunity, there comes a point where noncombatants are held at greater risk than technologically superior combatants." This is demonstrated by controversies surrounding US military and covert operations using drone strikes and the resulting collateral damage of noncombatant casualties. Absent the risk of casualties, due to the superiority of advanced weaponry, Rizza (2013) questions whether remote killings by robotic weaponry will continue to be seen "as military force or war at all."

Military robots, particularly those designed with lethal or defensive capabilities (such as for peacekeeping or police functions), will place robots in challenging scenarios of life and death involving human soldiers and civilians. The prospect of collateral damage appears high, and there are significant reservations about whether the translation of "rules of engagement" (ROE) for autonomous military robots can be adequately addressed on the basis of foreseeable near-term technological developments (Capurro and Nagenborg 2009). In this respect, significant technical challenges exist in discriminating targets from noncombatants and in translating ROE into rigid guidance for robot behavior consistent with the laws of warfare (LOW) or the law of armed conflict (LOAC). Furthermore, semiautonomous and especially autonomous platforms raise significant security concerns regarding the prospect of unauthorized access that compromises systems (such as may occur through hacks and malware) or overrides them altogether. Beyond these considerations, significant concerns have been raised that, short of an international moratorium on the development of lethal autonomous robots (LAR), an arms race will likely occur as countries seek to insulate their populations from direct combat threats and to exert increasing scales of military power through threat deterrence and lethal capability. Such developments with LAR will require international mechanisms for arms control similar to those for weapons of mass destruction.

Human-Robot Interaction And Moral Reciprocity

One major area of ethical reflection in robot ethics regards the scope of issues raised by human-robot interaction (HRI) and its relationship to antecedent reflections on human-computer interaction (HCI). Here wide-ranging issues are raised, such as behavioral and emotional intelligence, privacy and electronic monitoring, and machine learning and cognition, as well as more advanced questions related to moral responses to autonomous agents such as robots and, along with them, the prospect of robot rights. With the rising ubiquity of sensors and smart devices evolving as part of the Internet of Things, the ability of robots to access such data will raise important considerations regarding privacy in tension with personalization. Initial mixed responses to the 2013 release of Google Glass, an early wearable device incorporating a head-mounted display and camera, suggest that privacy will be a significant issue as the Internet of Things gains more pervasive global acceptance and robots with similar monitoring technologies are incorporated into mainstream society.

Furthermore, the prospect of incorporating machine learning will add increasingly social dimensions to HRI as humans learn to interact with responsive robots. Such interactions already exist between humans and other animals, but robots may offer more complex social interactions, influenced to a greater or lesser degree by the more humanoid or humaniform appearance of certain types of robots. Researchers in the fields of social robotics and affective computing have sought to increase machine recognition of emotion, as well as to explore the social implications of the social robots that Cynthia Breazeal and others are developing (e.g., Kismet, developed by Breazeal and others at the MIT research labs in the 1990s, or more recently Jibo, initially scheduled for commercial release in 2016).

A prominent area of social robotics and HRI regards the potential use of robots as companions, personal care robots, and even sex robots. Such potential uses not only reflect a spectrum of values and intentions but also convey the complexity of HRI. Does the assignment of a robot as a companion for the personal care of the elderly or the severely impaired reflect the humane care of a person in need? Does it in some way relegate the elderly or severely impaired to a sort of bare-life existence, or does it reflect the exigencies of healthcare systems unable to accommodate the human care components required for aging populations (such as are projected in Japan)?

Additionally, while expectations are clearly set regarding the moral behavior of robots, what expectations, if any, should be placed on the moral behavior of humans toward robots or other potential autonomous agents? Is the status of robots as moral agents, and thus, one presumes, as possessors of a sort of autonomous decision-making capacity, determinative of the import of this question (Capurro and Nagenborg 2009)? Or should moral reciprocity be considered prior to the possibility of AI? Should robots be extended certain rights or expectations regarding their treatment (Lin et al. 2012)?

Convergence, AI, And Human Futures

Robots and the prospect of AI-driven sentient machines frequently appear together in discussions of emerging technologies. In 2002, the National Science Foundation and the US Department of Commerce commissioned the report Converging Technologies for Improving Human Performance, which introduced the acronym NBIC (nanotechnology, biotechnology, information technology, and cognitive science) and, along with it, the convergence of previously disparate fields into a sort of technological singularity. Such convergence, the report argued, would bring exponential increases in the capability to improve health, overcome disability, and even permit human enhancement and post-human technologies, including a variety of developments in advanced robotics. Other convergence proposals, such as GRIN (genetic, robotic, information, and nano-processes), more explicitly identified the significance of robotics to these developments.

The convergence of such emerging technologies may open exponential leaps forward in areas such as regenerative medicine, including nanoscale robots, along with increasingly intelligent assistive devices and other autonomous agents; more modestly, it will exacerbate already existing social challenges regarding the pervasive use of information and communication technologies in such areas as privacy, social relationships, and technological moral agency. One particular issue these advances will raise is the challenge of distinguishing between humans as agents and technological artifacts, particularly as these technologies become increasingly interactive through direct interfaces such as BCI or cyborg technologies, as well as through the development of autonomous robots and the prospect of AI. Speculative proposals for robotics extend into the convergence with neuroscience and the possibility of cognitive uploads into robot avatars.

The prospect of increased human-computer interfacing by means of advanced prosthetics raises a broader conversation regarding the convergence of technologies and the prospects for human futures. Cybernetic organisms, or cyborgs, would retain autonomous decision-making capability but raise important considerations regarding the incorporation of technology into the human body and questions of identity in relationship to the body. As such augmentation incorporates neural technologies, questions about identity and personhood become further complicated. Existing conversations in neuroethics and the philosophy of mind are exploring the implications of outsourcing cognitive processes through proposals such as the extended mind hypothesis. The prospect of semi-intelligent or artificially intelligent robotic devices directly interfacing with human beings raises profound questions about the nature of the human person and identity.

While true AI is not essential to emerging applications of robotics, it offers another area of convergence between machine learning, information and communication technologies, and robotics, and one that receives a high degree of interest as such machines become increasingly complex in their interactions with their environments and with humans. Increasingly sophisticated machine learning systems and the prospect of AI raise profound prospects and existential risks for human futures. As with all areas of emerging technologies, a robust infrastructure for technology assessment is essential.

The threat of AI and robots running amok is a commonplace of science fiction. From the Terminator film franchise to Daniel Wilson's Robopocalypse (2012), such narratives reflect common nightmare scenarios of robot uprisings that break free from the control of their human makers. These visions are often said to reflect a general uneasiness with the scale of power being wielded in the creation of advanced technological artifacts such as robots and in the pursuit of artificial intelligence, a fear that the tool will outgrow its maker. At least one response to such possibilities is that all robotic technologies and AI systems be created with a "kill switch" that would allow a human operator to immediately shut down the machine. Of course, such kill switches would raise inherent problems of design vulnerability for military robots created with intentional offensive capabilities and thus may represent only a partial solution (Capurro and Nagenborg 2009). Such apocalyptic scenarios, similar to the gray goo scenarios associated with nanotechnology, are often raised as speculative ethics, and while they may play a role in technology assessment, care must be taken not to treat such scenarios as if they were necessary outcomes of such technological development.

The convergence of these emerging technologies also invokes the language of the singularity as popularized by Ray Kurzweil and later by proponents of transhumanism or H+. In many streams of transhuman and posthuman thought, humanity takes control of its evolutionary future through its technological prowess, and robots often are envisioned to play a significant role in this respect. For some, the notion of human futures ends in the legacy of humanity being bequeathed to our "mind children," the sentient machines humans create, which will eventually evolve beyond their human creators (Moravec 1999).

Possibilities of sentient machines and posthumanism, along with the convergence of robotics and neuroscience, raise important considerations of personhood, human identity, the limits of human nature, and ultimately what constitutes the status of being human. Considerations such as the common good, human flourishing, and human futures should be brought to bear in any analysis of converging technologies and human enhancement. While perhaps speculative in nature, the possibility of such futuristic technological outcomes should be included as part of a broader technology assessment when applied to robots. These speculative analyses may be distinct from analyses of presenting technologies but should not be ignored in a more complete analysis of robots as such.

Conclusion

Robots have demonstrated the potential to transform the landscape of industrial manufacturing, medical care and therapeutics, and transportation, as well as a wide spectrum of consumer and entertainment applications. As one of several emerging technologies, robots must be carefully examined for their potential benefits, with consideration given to immediate concerns over potential risks as well as to the broader impact of such technologies on conceptions of human nature and human futures in both their individual and global dimensions. Proper technology assessment of these technologies must explore not only important considerations of risk assessment for safety and security with respect to individual applications but also broader considerations, so as to anticipate and address the social implications that may arise from this rapidly evolving field of technology.

Bibliography

  1. Asimov, I. (1954). The caves of steel. Garden City: Doubleday.
  2. Brey, P., Briggle, A., & Spence, E. (Eds.). (2014). The good life in a technological age. New York: Routledge.
  3. Cameron, N., & Mitchell, E. (Eds.). (2007). Nanoscale: Issues and perspectives for the nano century. Hoboken: Wiley-Interscience.
  4. Capurro, R., & Nagenborg, M. (Eds.). (2009). Ethics and robotics. Heidelberg: AKA GmbH.
  5. Carr, N. (2014). The glass cage: Automation and us. New York: W. W. Norton & Company.
  6. DARPA. (n.d.). DARPA robotics challenge. Retrieved March 21, 2015, from http://www.darpa.mil/our_work/tto/programs/darpa_robotics_challenge.aspx
  7. Geraci, R. (2010). Apocalyptic AI: Visions of heaven in robotics, artificial intelligence, and virtual reality. New York: Oxford University Press.
  8. Gerdes, L. (Ed.). (2014). Robotic technology. New York: Greenhaven Press.
  9. Krishnan, A. (2009). Killer robots: Legality and ethicality of autonomous weapons. Burlington: Ashgate.
  10. Lin, P., Abney, K., & Bekey, G. (Eds.). (2012). Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press.
  11. Matarić, M. (2007). The robotics primer. Cambridge, MA: MIT Press.
  12. Moravec, H. (1999). Robot: Mere machine to transcendent mind. New York: Oxford University Press.
  13. Nordmann, A. (2007). If and then: A critique of speculative nanoethics. Nanoethics, 1(1), 31–46.
  14. Rizza, M. S. (2013). Killing without heart: Limits on robotic warfare in an age of persistent conflict. Washington, DC: Potomac Books.
  15. RoboCup 2014. (2014). Retrieved April 4, 2015, from http://www.robocup2014.org/?page_id=238
  16. Schneider, S. (Ed.). (2009). Science fiction and philosophy: From time travel to superintelligence. Malden: Wiley-Blackwell.
  17. ten Have, H. A. M. J. (Ed.). (2007). Nanotechnologies, ethics and politics. Paris: UNESCO Publishing.
  18. Tibbals, H. (2011). Medical nanotechnology and nanomedicine. Boca Raton: CRC Press.
  19. Turkle, S. (2010). Alone together: Why we expect more from technology and less from each other. New York: Basic Books.
  20. Verbeek, P. (2011). Moralizing technology: Understanding and designing the morality of things. Chicago: University of Chicago Press.
  21. Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. New York: Oxford University Press.
  22. Decker, M., & Gutmann, M. (Eds.). (2012). Robo- and information ethics: Some fundamentals. Zurich: LIT Verlag.
  23. Gunkel, D. (2012). The machine question: Critical perspectives on AI, robots, and ethics. Cambridge, MA: MIT Press.
  24. Singer, P. W. (2009). Wired for war: The robotics revolution and conflict in the twenty-first century. New York: Penguin.
