Soon, self-driving cars will be easier to hide in plain sight. The rooftop lidar sensors that currently mark out many of them are likely to shrink. Mercedes vehicles with the new, partially automated Drive Pilot system, which places its lidar sensors behind the car's front grille, are already indistinguishable to the naked eye from ordinary human-driven vehicles.
Would this be a good thing? As part of our Driverless Futures project at University College London, my colleagues and I recently concluded the largest and most comprehensive survey of citizens' attitudes to self-driving vehicles and road rules. One of the questions we decided to ask, after conducting more than 50 in-depth interviews with experts, was whether autonomous cars should be labeled. The consensus from our sample of 4,800 UK citizens is clear: 87 percent agreed with the statement "It should be clear to other road users if a vehicle is driving itself" (only 4 percent disagreed; the rest were unsure).
We sent the same questionnaire to a smaller group of experts. They were less convinced: 44 percent agreed and 28 percent disagreed that a vehicle's status should be advertised. The question is not straightforward; there are valid arguments on both sides.
We could argue that, as a matter of principle, humans should know when they are interacting with robots. That was the argument put forward in a 2017 report commissioned by the UK's Engineering and Physical Sciences Research Council. "Robots are manufactured artefacts," it said. "They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent." If self-driving cars on public roads are genuinely being tested, then other road users could be considered subjects in that experiment and should give something like informed consent. Another argument in favor of labeling is practical: as with a car driven by a student driver, it is safer to give a wide berth to a vehicle that may not behave like one driven by a well-practiced human.
There are also arguments against labeling. A label could be seen as an abdication of responsibility by innovators, implying that others should acknowledge and accommodate a self-driving vehicle. And it could be argued that a new label, without a clear shared understanding of the technology's limits, would only add confusion to roads already full of distractions.
From a scientific point of view, labels also affect data collection. If a self-driving car is learning to drive and others know this and behave differently, that could taint the data it gathers. Something like this seemed to be on the mind of a Volvo executive who told a reporter in 2016 that, "just to be on the safe side," the company would use unmarked cars for its proposed self-driving trial on UK roads. "I'm pretty sure that people would challenge them if they were marked, by doing really harsh braking in front of a self-driving car or putting themselves in the way," he said.
On balance, the arguments for labeling are more convincing, at least in the short term. This debate is about more than self-driving cars, though. It cuts to the heart of how novel technologies should be regulated. Developers of emerging technologies, who often portray them as disruptive and world-changing at first, are prone to describe them as merely incremental and unproblematic once regulators come knocking. But novel technologies do not simply fit into the world as it is. They reshape worlds. If we are to realize their benefits and make good decisions about their risks, we need to be honest about them.
To better understand and manage the deployment of autonomous cars, we need to dispel the myth that computers will drive just like humans, but better. Management professor Ajay Agrawal, for example, has argued that self-driving cars basically just do what drivers do, only more efficiently: "Humans have data coming in through the sensors – the cameras on our faces and the microphones on the sides of our heads – and the data comes in, we process the data with our monkey brains, and then we take actions, and our actions are very limited: we can turn left, we can turn right, we can brake, we can accelerate."