NEVER TOLERATE TYRANNY!....Conservative voices from the GRASSROOTS.
When and why should we ever trust them?
Dave Gershgorn, Quartz, December 7, 2017
Artificial intelligence is seeping into every nook and cranny of modern life. AI might tag your friends in photos on Facebook or choose what you see on Instagram, but materials scientists and NASA researchers are also beginning to use the technology for scientific discovery and space exploration.
But there’s a core problem with this technology, whether it’s being used in social media or for the Mars rover: the programmers who built it don’t know why the AI makes one decision over another.
Modern artificial intelligence is still new. Big tech companies have only ramped up investment and research in the last five years, after a decades-old theory was finally shown to work in 2012. Inspired by the human brain, an artificial neural network relies on layers of thousands to millions of tiny connections between “neurons,” little clusters of mathematical computation that link up much as neurons do in the brain. But that software architecture comes with a trade-off: because the changes spread across those millions of connections are so complex and minute, researchers can’t determine exactly what is happening. They just get an output that works.
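To make that trade-off concrete, here is a minimal sketch in Python with NumPy of a tiny network; the layer sizes and random numbers are purely illustrative and not from the article. The point is that everything the network "knows" lives in unlabeled arrays of weights.

    # A minimal illustrative sketch (not from the article): a tiny two-layer
    # network whose "knowledge" is nothing but unlabeled arrays of weights.
    import numpy as np

    rng = np.random.default_rng(0)

    # 100 inputs -> 50 hidden "neurons" -> 1 output.
    W1 = rng.normal(size=(100, 50))   # 5,000 connection weights
    W2 = rng.normal(size=(50, 1))     # 50 more

    def predict(x):
        hidden = np.maximum(0, x @ W1)   # each hidden unit mixes all 100 inputs
        return hidden @ W2               # the output mixes all 50 hidden units

    x = rng.normal(size=(1, 100))
    print(predict(x))   # a number comes out, but nothing in W1 or W2 says why;
                        # at real scale that opacity is the "black box" problem

Real systems have millions of such weights, which is why researchers can verify that the output works without being able to say why.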
At the Neural Information Processing Systems conference in Long Beach, California, the most influential and highest-attended annual AI conference, hundreds of researchers from academia and the tech industry will meet today (Dec. 7) at a workshop to talk about the issue. The problem is already here, but researchers who spoke to Quartz say there is still time to act, making the decisions of machines understandable before the technology becomes even more pervasive.
“We don’t want to accept arbitrary decisions by entities, people or AIs, that we don’t understand,” said Uber AI researcher Jason Yosinski, co-organizer of the Interpretable AI workshop. “In order for machine learning models to be accepted by society, we’re going to need to know why they’re making the decisions they’re making.”
As these artificial neural networks are starting to be used in law enforcement, healthcare, scientific research, and determining which news you see on Facebook, researchers are saying there’s a problem with what some have called AI’s “black box.” Previous research has shown that algorithms amplify biases in the data from which they learn, and make inadvertent connections between ideas.
For example, when Google made an AI generate the idea of “dumbbells” from images it had seen, the dumbbells all had small, disembodied arms sticking out from the handles: the network had only ever seen dumbbells being gripped, so it learned the arm as part of the object. That bias is relatively harmless; when race, gender, or sexual orientation is involved, it becomes less benign.
“As machine learning becomes more prevalent in society—and the stakes keep getting higher and higher—people are beginning to realize that we can’t treat these systems as infallible and impartial black boxes,” Hanna Wallach, a senior researcher at Microsoft and speaker at the conference, tells Quartz in an email. “We need to understand what’s going on inside them and how they are being used.”
At NASA’s Jet Propulsion Laboratory, artificial intelligence allows the Mars rover to operate semi-autonomously as it explores the planet’s surface. AI is also used to comb through the thousands of pictures taken by the rover once they’re transmitted back to Earth.
But Kiri Wagstaff, a JPL AI researcher and speaker at the workshop, says AI needs to be understood before it’s used, due to the high risks every decision brings in space.
“If you have a spacecraft in orbit around Mars, it’s 200 million miles away, it costs hundreds of millions of dollars, potentially even a billion dollars, to send there. If anything goes wrong, you’re done. There’s no way to repair, visit, replace that thing without spending an immense amount of money,” Wagstaff says. “So if we want to put machine learning in play, then the people running these missions need to understand what it’s doing and why, because why would they trust it to control their Mars rover or orbiter if they don’t know why it’s making the choices it’s making?”
Wagstaff works on building the AI that sorts through images captured in space by NASA’s various spacecraft. Since those images can number in the millions, having an AI pick out the interesting photos could be a big timesaver—but only if the AI knows what an “interesting” image looks like.
To Wagstaff, being able to understand what the AI is looking for is crucial to deploying the algorithm. If there were a mistake in how it learned to sort through images, it could pass over data worth the millions of dollars the mission cost.
“Just being presented with an image that a computer said ‘Oh this is interesting, take a look’ leaves you in this sort of limbo, because you haven’t looked at all million images yourself, you don’t know why that’s interesting, what ticked,” Wagstaff says. “Is it because of its color, because of its shape, because of the spatial arrangement of objects in the scene?”
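One way a flag like that could come with a reason is sketched below as a toy example; this is not JPL's actual pipeline, and the feature names and data are invented. It simply scores a few hand-picked image features against the archive's typical values and reports which one drove the novelty score.

    # A toy sketch, not JPL's actual pipeline: attach a human-readable reason
    # ("color", "shape", or "spatial arrangement") to an "interesting image" flag
    # by reporting which feature deviates most from the archive's typical values.
    import numpy as np

    feature_names = ["color", "shape", "spatial_arrangement"]    # invented features

    archive = np.random.default_rng(3).normal(size=(10000, 3))   # past images' features
    mean, std = archive.mean(axis=0), archive.std(axis=0)

    def explain_interest(features):
        """Return per-feature deviations and the feature that stands out most."""
        z = np.abs((features - mean) / std)
        return dict(zip(feature_names, z)), feature_names[int(np.argmax(z))]

    scores, top = explain_interest(np.array([0.1, 4.2, -0.3]))
    print(top, scores)   # e.g. 'shape' drove the flag, which a scientist can check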
In 2007, Andrew Gordon Wilson, an AI professor at Cornell University and co-organizer of the Interpretable AI workshop, was working with a team to build a new kind of PET scanning machine. Since certain particles didn’t behave in the machine as they did in the rest of the physical world, he was tasked with tracking how one such particle moved through a tank of xenon.
His adviser suggested trying to use neural networks, which were still relatively obscure at the time. Using the technology, Wilson was able to use the light emitted by the particle to locate it in the xenon chamber.
While he got the answer he was looking for, Wilson says that understanding the internal rules the algorithm had built, the way light indicated the position of the particle, could have opened a new avenue of research.
“In a way, a model is a theory for our observation, and we can use the model not just to make predictions but also to better understand why the predictions are good and how these natural processes are working,” Wilson said.
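The task Wilson describes is, at heart, a regression problem: map light readings to a position. The sketch below shows that kind of setup with scikit-learn; the sensor layout, simulated data, and network size are all invented for illustration and are not taken from his work.

    # A hypothetical sketch of the kind of task described above (details invented):
    # train a small neural network to map light-sensor readings to a particle's
    # position, then note that the fitted weights alone don't explain the physics.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)

    # Fake data: 8 light sensors around a chamber; the true position is 1-D here.
    positions = rng.uniform(-1.0, 1.0, size=(2000, 1))
    sensors = np.hstack([np.exp(-(positions - c) ** 2) for c in np.linspace(-1, 1, 8)])
    sensors += rng.normal(scale=0.01, size=sensors.shape)    # measurement noise

    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(sensors, positions.ravel())

    print(model.predict(sensors[:3]))   # accurate positions come out...
    print(model.coefs_[0].shape)        # ...but the 8 x 32 learned weights don't,
                                        # by themselves, say how light encodes position

Reading meaning out of those weights, rather than just using them, is the extra step Wilson suggests could have become research in its own right.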
But to break new ground on interpretability, one of the biggest challenges is simply defining it, says Wallach, the Microsoft researcher.
Does interpretability mean that AI experts know why Facebook’s algorithm is showing you a specific post, or that you understand it yourself? Does a doctor using an AI-powered treatment recommendation system need to know why a specific regimen was suggested, or is that the job of another role, something like an AI overseer, that needs to be created in a hospital?
Wallach calls interpretability a latent construct: something that can’t be observed directly, but instead has to be gauged by testing how real people use and misunderstand AI systems. It’s not just a matter of lifting the hood of an algorithm and watching how the engine runs.
Understanding an algorithm isn’t just a way to guard against bias or to make sure your rover won’t fall off a Martian cliff; knowing how a system fails can help AI researchers build more accurate systems.
“If your system doesn’t work and you don’t know why, it’s quite hard to improve it,” Uber’s Yosinski says. “If you do know why it’s failing, oftentimes the solution is a foregone conclusion.”
To figure out how one of its algorithms thinks, Google is trying to sift through the millions of computations made every time the algorithm processes an image. In a paper presented at the NIPS conference, Google researcher Maithra Raghu showed that she was able to fix an unwanted association much like the one between dumbbells and arms, only this time between tree bark and birds.
By looking at which neurons in the network were activated when the AI looked at images of birds, Raghu was able to determine which were focusing on the bird and which were focusing on the bark, and then turn the bark neurons off. The success is a sign that, for all its complexity, translating a neural network’s work into something a human understands isn’t impossible.
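The general idea, shown schematically below, is often called ablation. This sketch is not the paper's actual method; the activation data and the threshold are invented. It compares average unit activations on the two kinds of images, flags units that respond mainly to bark, and switches them off.

    # A schematic sketch of the ablation idea (not the paper's actual method):
    # find hidden units that fire mainly for bark rather than birds, then zero
    # those units out before re-running the network on its bird images.
    import numpy as np

    rng = np.random.default_rng(2)

    # Pretend activations of 64 hidden units on 100 bird images and 100 bark images.
    bird_acts = rng.gamma(2.0, 1.0, size=(100, 64))
    bark_acts = rng.gamma(2.0, 1.0, size=(100, 64))
    bark_acts[:, :16] *= 5.0    # invented: the first 16 units fire hard on bark

    # Units whose mean activation is far higher for bark than for birds.
    bark_units = np.where(bark_acts.mean(axis=0) > 2.0 * bird_acts.mean(axis=0))[0]

    def ablate(activations, units):
        """Return a copy of the activations with the selected units zeroed out."""
        out = activations.copy()
        out[:, units] = 0.0
        return out

    cleaned = ablate(bird_acts, bark_units)
    print(len(bark_units), "bark-sensitive units switched off")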
“In school we ask students to put it in their own words to prove that they’ve understood something, to show their work, justify their conclusion,” Wagstaff says. “And now we’re expecting that machines will be able to do the same thing.”
Should machines, computers, and/or Artificial Intelligence EVER replace the human factor in decision making?
Looking at how poorly the VOTER POLLS predicted the outcome of President Trump's election, I would say NO.
*
But . . . . were those POLLS the actual AI result? Did a human factor enter into the reported results? Were we getting the REAL AI result or a modified result purposely changed to achieve a predetermined number?
Maybe the AI machines had been right all along but we weren't allowed to know that.