5 questions for Jenny Toomey
Welcome back to our regular Friday feature, The Future in Five Questions. This week we interview Jenny Toomey, a longtime advocate for internet transparency and open architecture who now runs the Ford Foundation’s “Catalyst Fund” — a $50 million fund to foster technology to serve the public interest. Toomey is also a veteran of the D.C. indie rock scene and former music journalist. Responses have been edited for length and clarity.
What’s one underrated big idea?
That it’s not all about the technology, so much as it’s about the technologists. We’re in an age of tech solutionism where we believe that tech is going to solve all of our problems, but even a technology that solves big problems causes other problems. So investing in those technologies without actually building a scaffolding of differently trained, brilliant technologists around it means that we keep making the same errors over and over and over again.
We need technologists who have different backgrounds, whether that’s journalism, or policy expertise, or law — people who are thinking about tech not just as solving a specific problem, but in the context of the broader social need.
What’s a technology you think is overhyped?
The concept of tech solutionism itself.
I’m not a Luddite. I’m talking to you on this computer right now, I’m going to do all my work on it later today, and then I’ll play guitar into my computer. I love technology, there’s nothing wrong with it, it’s going to be part of everything. But the idea that tech itself is the solution to society’s problems is ridiculous.
What book most shaped your conception of the future?
I’ll give you three. One is Lessig’s “Code,” which was the first time I realized that code is the vessel that determines what can happen within it. Lessig said “code is law,” and Professor Latanya Sweeney at Harvard, who worked at the FTC, says “code is policy”: if you don’t have the right code in place, that determines what can happen. It really helped me understand that A, it’s a system, and B, it doesn’t necessarily need to be that way.
Cathy O’Neil’s book “Weapons of Math Destruction” is the best book out there to explain what can go wrong if you have a real deep belief that AI will solve everything. AI is a black box using data where you don’t know where it came from, and there’s no accountability mechanism for asking why this black box told me I can’t get this apartment, or that I’m not up for this job, or that I’m getting a lower grade from the software that’s assessing me as an educator. She explains very, very clearly the concrete harms of being too impressed with unaccountable AI systems.
The last is “Power to the Public: The Promise of Public Interest Technology” by Tara Dawson McGuinness and Hana Schank. It’s telling the story of these incredible public interest technologists going into federal, state and local governments, and working with federal agencies and the most impacted populations to determine what kinds of tech solutions will meaningfully make things better.
What could government be doing regarding tech that it isn’t?
They need to be recruiting differently, to ensure that people with technical skills and the right context, drawn from diverse backgrounds, are going into different government agencies.
For too long, government has contracted with outside firms and kept technology at a degree of separation, delegating the design to people outside government. They need to bring technologists in, and not just for implementation: every single policy has a technology component. Policymakers will spend 99 percent of their time fighting to pass a law or get a regulation in place, and never think about what the public will experience, which will be completely intermediated by technology.
What has surprised you most this year?
Two things. One is that I’m seeing momentum: I’m seeing things like U.S. Digital Response during the pandemic, where all of these technologists volunteered to help figure out how people could get their vaccines, and to help governments get their election information and census information out. It’s amazing to see these technologists say, “I have this skill set, and I want to use it in the public interest in this way.” I love seeing that happen.
The other thing is that even in an environment where every day we wake up and see front-page stories about the harms of disinformation, surveillance and biased AI, there is still the predilection to believe that tech alone will solve these problems.
A Biden administration official announced yesterday that the U.S. and EU are teaming up to build a “road map” aimed at keeping AI systems from being used as tools for surveillance or repression by China.
As POLITICO’s Doug Palmer reported for Pro subscribers, Commerce Undersecretary Marisa Lago called the plan a “mutual priority that is going to grow in scope” at a Chamber of Commerce event. Lago said the full details of the road map will be announced at the U.S.-EU Trade and Technology Council in Washington next month.
Exactly what the shared approach is will be revealing. The EU continues to work on its AI Act, putting in place stringent statutory requirements for the use and development of AI systems, whereas the Biden administration has laid out its own blueprint for an “AI Bill of Rights” that incentivizes industry with favorable treatment for following its principles.
When Elon Musk took over Twitter, one of his top priorities was to eradicate bots from the platform.
Instead, his new verification policies might be opening the door to a novel, and very creepy, form of AI-assisted spam on the platform. As a pseudonymous Twitter user who purports to be a data scientist pointed out yesterday, in the first hours of Twitter’s new verification policy a slew of verified accounts popped up, all sporting avatars generated by a “generative adversarial network,” the spooky face-generating technique used by websites like thispersondoesnotexist.com.
According to the post, the Twitter accounts of “independent journalist” “Oliver Vanderstraten” and bicycling activist “Rich Seager” might appear at first glance to be “official” accounts with smiling human faces, but are in fact not real people at all.
Each of the 13 accounts flagged in the viral tweet is now suspended, and Twitter has paused the ability to pay $8 for a blue checkmark. Musk himself may have warned that Twitter will “do lots of dumb things in coming months,” but cases like this suggest it might be what Twitter doesn’t do, or doesn’t consider, that leads to mass confusion.
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.