
‘Citizen AI’: Teaching artificial intelligence to act responsibly


Researchers at the Icahn School of Medicine at Mount Sinai in New York have a unique collaborator: an in-house artificial intelligence system known as Deep Patient. The researchers have taught Deep Patient to predict risk factors for 78 different diseases by feeding it electronic health records from 700,000 patients, and doctors now turn to the system to aid in diagnoses.
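
In practice, this kind of system follows a supervised learning pattern: structured patient records go in, and per-disease risk estimates come out. The sketch below illustrates that general pattern with synthetic data and a simple scikit-learn classifier; it is an illustration of the approach only, not Mount Sinai's actual data or architecture, and every feature and value in it is made up.

```python
# Minimal sketch of the general pattern behind a system like Deep Patient:
# structured patient records in, per-disease risk estimates out.
# The features, outcome, and model here are illustrative stand-ins,
# not Mount Sinai's actual data or architecture.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for EHR-derived features (age, labs, vitals, ...).
n_patients, n_features = 5000, 20
X = rng.normal(size=(n_patients, n_features))
# Synthetic binary outcome for one disease, loosely tied to the features.
y = (X @ rng.normal(size=n_features) + rng.normal(size=n_patients) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One risk model per disease; a real system would repeat this (or use a
# shared multi-task model) across all 78 conditions.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # estimated probability of disease
print(f"Mean predicted risk on held-out patients: {risk.mean():.3f}")
```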

Deep Patient is more than just a program. Like other advanced AI systems, it learns, makes autonomous decisions, and has grown from a technological tool into a partner, coordinating and collaborating with humans. This isn’t surprising, given that four out of five executives (81 percent) surveyed for Accenture’s most recent Tech Vision report believe that within the next three years, AI will work alongside humans as a coworker, collaborator, and trusted advisor.

Bringing up baby

For some organizations, AI is already the public face of the business, handling everything from initial interactions via chat, voice, and email to vital customer service roles. But any business looking to capitalize on AI’s potential must acknowledge the full scope of its impact. Just as parents hope to raise children who act responsibly and communicate effectively, businesses need to “raise” their AI systems to act as responsible representatives of the business and reflect company and societal norms of fairness and transparency.

AI was initially driven by rules-based data analytics programs, statistical regressions, and early “expert systems.” But the explosion of powerful deep neural networks now gives AI systems something a mere program doesn’t have: the ability to do the unexpected.

For businesses, this means changing how they view AI — from systems that are programmed to systems that learn. After all, education isn’t about teaching someone to do one task; it’s about giving someone the tools to approach and solve problems themselves. This is the approach businesses must take with AI. Raising AI requires addressing many of the same challenges we encounter raising and educating children: fostering an understanding of right and wrong, imparting knowledge without bias, and building self-reliance while emphasizing the importance of collaborating and communicating with others.

To meet this new responsibility, companies can look to milestones of human development for guidance. First, people learn how to learn, then they rationalize or explain their thoughts and actions, and eventually they accept responsibility for their decisions. With a successfully trained and raised AI, a company essentially creates a new worker — one that can be scaled across operations.

Where to begin?

This process depends on data — the right data, and a lot of it. As children learn to communicate, they often use gestures before words. Ultimately, however, they must master the taxonomy of language to scale their understanding of the world. Similarly, a company’s AI starts from basic principles, then progressively builds its skills from set taxonomical structures. The companies with the best data available to train their AI will create the most capable AI systems.

For instance, Google recently released an open source dataset that helps companies teach their AI to understand how people speak. The company recorded 65,000 clips of thousands of different people speaking to create a dataset that would prepare an AI to understand just 30 words in a single language. This scale of training data has enabled Google’s voice recognition to reach 95 percent accuracy.
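
Because that dataset is open, a team can inspect it before training anything. The sketch below loads it through TensorFlow Datasets, where it is published under the name speech_commands; note that the TFDS packaging may group the raw recordings into a smaller keyword set than the original 30-word release, and this inspection code is just one plausible starting point, not Google’s own recognition pipeline.

```python
# Sketch: inspecting Google's open Speech Commands dataset via TensorFlow
# Datasets (published there as "speech_commands"). This only loads and
# inspects the data; it is a starting point, not Google's own pipeline.
import tensorflow_datasets as tfds

ds, info = tfds.load("speech_commands", split="train", with_info=True)

# The dataset covers a small, fixed vocabulary of spoken words.
labels = info.features["label"].names
print(f"{info.splits['train'].num_examples} training clips")
print(f"{len(labels)} labels: {labels}")

# Each example pairs a raw audio waveform with a word label.
for example in ds.take(1):
    audio, label = example["audio"], example["label"]
    print("waveform shape:", audio.shape, "label:", labels[label.numpy()])
```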

A moral code

Companies building AI systems must provide a context for their AI and those it will be communicating with, whether customers, employees, or other AI systems. At the same time, companies must use care when selecting taxonomies and training data, as it’s not just about scale but about actively minimizing bias in the data. When researchers curate datasets to minimize bias — as well as documenting, organizing, and properly labeling the data — companies can build a strong library of AI models ready for reuse.
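
One concrete, if simplified, form of that care is auditing label balance in the training data before any model sees it. The sketch below does this for a toy tabular dataset; the column names and the threshold are hypothetical, and a real bias review would cover many more attributes, intersections, and proxy variables.

```python
# Sketch: a simple pre-training audit of label balance across a sensitive
# attribute. Column names ("gender", "approved") are hypothetical; a real
# bias review would examine many attributes, intersections, and proxies.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [1,   1,   0,   1,   0,   1,   1,   1],
})

# Approval rate per group: large gaps here will be learned and reproduced
# by any model trained on this data.
rates = df.groupby("gender")["approved"].mean()
print(rates)

# Flag the dataset for review if group rates diverge past a threshold.
if rates.max() - rates.min() > 0.2:
    print("Warning: outcome rates differ across groups; review before training.")
```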

Finally, businesses must raise AI systems to act responsibly. What happens, for instance, if an AI-powered mortgage lender denies a loan to a qualified prospective home buyer or if an AI-guided shelf-stocking robot runs into a worker in a warehouse? Companies using AI must think carefully about apportioning responsibility and liability for its actions — in fact, some already are.

Audi, for example, has announced that it will assume liability for accidents involving its 2019 A8 model when its Traffic Jam Pilot automated system is in use. And the German federal government has adopted ahead-of-the-curve rules around the way autonomous cars should act in an unavoidable accident — choosing material damage over injuring people and not discriminating by gender, age, or race.

Just the beginning

As AI becomes more firmly and widely integrated into society, it will impact everything from financial decisions to health, criminal justice, and beyond. As this sphere of influence expands, the responsibilities around training AI will only grow. Businesses that don’t consider their AI an entity they must raise will struggle to catch up with new regulations and public demands — or worse, unleash problematic AIs that cause strict regulatory controls to be placed upon the entire industry.

Leaders must accept the challenge of raising their AI in a way that acknowledges its impact on society. In doing so, they’ll set the standards for what it means to create a responsible, explainable AI system while at the same time building trust with customers and employees. Finding a moral framework for AI will be a crucial step in the technology’s integration into society. Call it “Citizen AI.”


By Michael Biltz

Source: https://venturebeat-com.cdn.ampproject.org/c/s/venturebeat.com/2018/03/31/citizen-ai-teaching-artificial-intelligence-to-act-responsibly/amp/
