Washington AI Network Hosts Meta’s Polina Zvyagina, Others
From:
The Georgetowner Newspaper -- Local Georgetown News
For Immediate Release:
Dateline: Georgetown, DC
Monday, July 15, 2024


Tammy Haddad created the Washington AI Network to bring artificial intelligence innovators and policymakers together in person to facilitate exchanges that just wouldn’t happen online.  From the looks of it, it is working.

The Washington AI Network gathering on July 8 was presented in partnership with the AI Alliance, a “collaborative network of more than 100 companies, startups, universities, research institutions, government organizations and non-profit foundations.” It supports an open AI ecosystem where innovation is tempered by trust and fairness.

The biggest reveal of the event came from Yale’s Annie Hartley, who explained the mechanics of eliminating bias in AI. The concept is “unlearning.” Just as an AI model can learn, it can unlearn, she explained, describing how a model can “unlearn” connections it has made. For example, two words may be associated in the model in a biased fashion, like Ethnic Group One and Descriptive Word. That association can be changed because it is expressed as a number within the system; AI users aren’t stuck with it.

“It is possible to identify problematic elements in an AI model and ‘unlearn’ that association. Since AI is all based on numbers, it is possible to change the numbers that connect those words,” she said.
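For readers who want to see the idea in miniature, here is a minimal sketch in Python, assuming toy word vectors rather than anything from Hartley’s own system: the “association” between two words really is just a number (a similarity score), and “unlearning” can be as simple as changing the numbers so that the score drops toward zero.

# Illustrative sketch only -- not Dr. Hartley's actual method. Toy word
# vectors stand in for what a real model learns; the point is that the
# association between two words is a single number that can be changed.
import numpy as np

def cosine(a, b):
    """Association score between two word vectors: one number."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
group_word = rng.normal(size=8)                      # stand-in for "Ethnic Group One"
descriptor = group_word + 0.1 * rng.normal(size=8)   # biased: nearly parallel vectors

print("association before unlearning:", round(cosine(group_word, descriptor), 2))

# "Unlearning" modeled here as removing the component of the descriptor
# that points along the group vector, so the learned association weakens.
direction = group_word / np.linalg.norm(group_word)
descriptor = descriptor - (descriptor @ direction) * direction

print("association after unlearning: ", round(cosine(group_word, descriptor), 2))

Real models involve billions of such numbers, but the principle the panel described is the same: find the numbers that encode the problematic association and adjust them.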

More than that, open-source AI makes it possible to compare biases between two places, like South Korea and the United States.

Known for gathering AI thought leaders, the Washington AI Network hosted guests from Meta, IBM, Yale and SeedAI to talk about open AI and “the path forward.”

Haddad’s curated assembly was fertile ground for ideas and solutions. People from Silicon Valley, academia and Capitol Hill mixed and chatted in the House at 1229 while noshing on an array of fried treats. I had a very interesting discussion about assigning liability for AI harms and spoke to some young women about how AI augments their learning.

The mingling before the presentations yielded mixed experiences. Surely, representatives of the most powerful corporations in the world can expect informed questions about the value of their open AI projects weighed against the risks and harms. So when IBM’s Vice President of Emerging Technology Advocacy was asked about the child sexual abuse imagery being manufactured with open-source AI, one would expect something other than her running from the room while chanting “there are bad guys everywhere.” One would be wrong. When she stood in front of the event banner for a group photo, I was tempted to sidle up to her to see how fast she could sprint. Prior to that, she denied even knowing what CSAM (child sexual abuse material) was.

By far the most impressive AI work on display came from Hartley of the Yale Institute for Global Health, who is working with students to create an AI system that provides medical support in low-resource regions with few or no doctors or even nurses. She has devised a methodology in which an AI system is guided by doctor feedback to improve its answers. More than that, the AI system itself is hyper-localized to a specific hospital or region. Once trained, the AI is owned by that medical group.
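To make the feedback idea concrete, here is a minimal sketch of a clinician-review loop. The model.answer() and clinician.review() interfaces are hypothetical placeholders, not Hartley’s implementation; the sketch only illustrates the idea that doctor feedback becomes the training data for the next, localized version of the system.

# Hypothetical sketch of a doctor-in-the-loop feedback cycle; the model and
# clinician objects are assumed interfaces, not a real API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Review:
    question: str
    draft_answer: str
    approved: bool
    correction: Optional[str] = None

def collect_feedback(model, questions, clinician):
    """Ask the model, let a doctor approve or correct each answer, keep the results."""
    reviews = []
    for question in questions:
        draft = model.answer(question)               # hypothetical interface
        verdict = clinician.review(question, draft)  # hypothetical interface
        reviews.append(Review(question, draft, verdict.approved, verdict.correction))
    return reviews

def build_training_set(reviews):
    """Approved answers pass through; corrected ones replace the draft."""
    return [
        (r.question, r.draft_answer if r.approved else r.correction)
        for r in reviews
        if r.approved or r.correction
    ]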

Also on display were the latest Ray-Ban Meta smart glasses. Trying them was an interesting experience: music played in your ears when you wore them, and you could hear a voice read you a text that the glasses had translated from another language. If your aesthetic is Groucho Marx, then these glasses are for you.

The speakers for the event were Polina Zvyagina, Meta’s Director of AI Policy & Governance; Austin Carson, SeedAI’s founder and President; Daniela Combe, IBM’s Vice President of Emerging Tech Advocacy; and Dr. Annie Hartley, Yale’s Assistant Professor of Biomedical Informatics & Data Science.

Their presentations varied from the visionary to the fanciful. Meta’s Zvyagina spoke movingly about the importance of interpersonal connections, and she probably believes it. The problem with techno-optimists is that someone always needs to remind them of the downside. When asked about the effect of commodifying those relationships to sell advertising, she had no real answer.

SeedAI’s Carson raised concerns about the data fed into the AI software that produces the answers it gives. He pointed out that most people are not represented in the creation of AI models, resulting in problems ranging from misidentification to bias. His organization promotes “a representative, new, diverse generation of the AI workforce,” according to SeedAI’s website.

Our friend from IBM, who didn’t welcome conversations about the very real risks of an open AI ecosystem, assured everyone present that IBM’s business-oriented client base values transparency and trust, which is why IBM supports open AI. Indeed.

The panelists also gave a quick review of “weights,” as the term is used in AI. All agreed that Wikipedia is one of the best representations of human intelligence and cooperation. Hartley stated that randomized controlled trials must happen in the field to test AI and acquire the necessary context and safety.

The Washington AI Network puts on an interesting event, with informed and motivated people in attendance. Attire was D.C. after-work, plus a few jeans with interesting t-shirts. If invited, feel free to wear a dress (not a sundress), a suit, pants and a blazer, or jeans and an “I am a tech person” t-shirt. No law firms.

News Media Interview Contact
Name: Sonya Bernhardt
Group: The Georgetowner Newspaper
Dateline: Georgetown, DC United States
Direct Phone: 202-338-4833