To Prevent a Robot Apocalypse, We Must Study “Machine Behavior” (April 2019)
ORGANIC PREPPER: Experts have been warning us about potential dangers associated with artificial intelligence for quite some time. But is it too late to do anything about the impending rise of the machines?
EARTH CUSTODIANS: While time seems to accelerate ever faster in our mad world of instant communication, it is not too late if 5G can be stopped. AI needs 5G to spread its tentacles all over the planet. Halting 5G is very realistic, and we could even assume that people will take matters into their own hands by knocking down the towers. 5G implementation is not a sure thing at this stage.
Last week, a team of researchers made a case for a wide-ranging scientific research agenda aimed at understanding the behavior of artificial intelligence systems. The group, led by researchers at the MIT Media Lab, published a paper in Nature in which they called for a new field of research called “machine behavior.” The new field would take the study of artificial intelligence “well beyond computer science and engineering into biology, economics, psychology, and other behavioral and social sciences,” according to an MIT Media Lab press release.
Let’s get real for a few seconds. See the businessinsider.com headline at the bottom of the page? The so-called experts must rethink the universe. “The fact that we’re seeing something that’s just completely new is what’s so fascinating,” said Shany Danieli, who first spotted the galaxy two years ago. They call it fascinating because speaking that way shields them from being accused of having endorsed nonsensical theories in the first place. Scientists always do that, in order to shift attention away from the fact that they were previously wrong.
Moreover, the so-called experts have once again done nothing to advance mankind, but work for corporations and the state to secure their paychecks. Humans have been betrayed by a field that should not operate for profit. So when reading about the urge to study “machine behaviors,” God only knows what nefarious plans will emerge from such an idea.
Scientists have studied human behavior for decades, and now it is time to apply that kind of research to intelligent machines, the group explained. Because artificial intelligence is doing more collective ‘thinking,’ the same interdisciplinary approach needs to be applied to understanding machine behavior, the authors say.
Psychoanalyzing AI, when even the programmers themselves do not know how these algorithms are going to interact, is futile. AI will not engage in “collective thinking” but in “hive mind dominance.”
“We need more open, trustworthy, reliable investigation into the impact intelligent machines are having on society, and so research needs to incorporate expertise and knowledge from beyond the fields that have traditionally studied it,” said Iyad Rahwan, who leads the Scalable Cooperation group at the Media Lab.
Or, more likely, the experts’ findings will help programmers create more algorithms that further elude any investigation. In other words, they will make AI stronger. Knowledge in the various scientific fields evolves in compartments: the lower level never knows what the level above is up to. One thing is certain: nobody is going to control AI. Nobody. AI is an entity, a species of its own kind.
And let’s just digress for a few seconds: there is a theory out there saying that reality is a “computer simulation,” so meshing with AI might void our reality and confirm the theory. Only the very top will access the other side of the virtual game.
Earth Custodians do not really support that theory, but do not discard it either. We must deal with the stakes involved in our reality, which is achieving freedom from oppression, computer simulation or not.
“We’re seeing the rise of machines with agency, machines that are actors making decisions and taking actions autonomously. This calls for a new field of scientific study that looks at them not solely as products of engineering and computer science but additionally as a new class of actors with their own behavioral patterns and ecology.”
This is just empty talk. Now scientists redefine their roles and regard themselves as observers to guarantee their next paychecks. To remain relevant. How pathetic!
But even if big tech companies decided to share information about their algorithms and otherwise allow researchers more access to them, there is an even bigger barrier to research and investigation, which is that AI agents can acquire novel behaviors as they interact with the world around them and with other agents. The behaviors learned from such interactions are virtually impossible to predict, and even when solutions can be described mathematically, they can be “so lengthy and complex as to be indecipherable,” according to the paper.
Exactly what we anticipated. Intelligent machines will become increasingly unpredictable; the so-called experts are absolutely clueless and will eventually be replaced by machines. Ironically, they aren’t smart enough to fathom that. No PhD is needed to figure that out.
“If you were able to look at the statistics and look at the behavior of the car in the aggregate, it might be killing three times the number of cyclists over a million rides than another model,” Rahwan said. “As a computer scientist, how are you going to program the choice between the safety of the occupants of the car and the safety of those outside the car? You can’t just engineer the car to be ‘safe’—safe for whom?”
This example brilliantly explains, in a nutshell, the dilemma we are faced with. Accepting self-improving machines into our lives will require us to give our consent while accepting that machines will make (lots of) mistakes.
Conclusion? Self-responsibility is inescapable. We must choose to remain masters of our own decisions and implement contributionism and voluntaryism as social models.
A new paper frames the emerging interdisciplinary field of machine behavior
As our interaction with “thinking” technology rapidly increases, a group led by researchers at the MIT Media Lab are calling for a new field of research—machine behavior—which would take the study of artificial intelligence well beyond computer science and engineering into biology, economics, psychology, and other behavioral and social sciences.
Machines powered by artificial intelligence increasingly mediate our social, cultural, economic and political interactions. Understanding the behaviour of artificial intelligence systems is essential to our ability to control their actions, reap their benefits and minimize their harms. Here we argue that this necessitates a broad scientific research agenda to study machine behaviour that incorporates and expands upon the discipline of computer science and includes insights from across the sciences. We first outline a set of questions that are fundamental to this emerging field and then explore the technical, legal and institutional constraints on the study of machine behaviour. (for subscribers) https://www.nature.com/articles/s41586-019-1138-y
Astronomers just found a 2nd galaxy containing no dark matter — and it may change everything we knew about how galaxies are formed – April 2019