How a Bush advisor made sure Facebook had no more RW content removed than LW, whether true or not
Last edited Sat Mar 13, 2021, 06:07 AM - Edit history (1)
or even for anti-vaccine content.
This is a long article about a Facebook team exploring what AI could do to improve the algorithms that decide what content people are served. The team is doing some good things (e.g. not inferring someone's race and then serving them different content on that basis; Facebook is being sued over that, so it has to fix it), but it is not stopping the indoctrination of people into extremism, because that would interfere with the growth of Facebook usage. About three-quarters of the way through, we meet Joel Kaplan. His background:
Joel David Kaplan (born 1969) is an American political advisor and former lobbyist serving as Facebook's vice president of global public policy.[1] Previously, he worked eight years in the George W. Bush administration.[2] After leaving the Bush administration, he was a lobbyist for energy companies.[3]
Within Facebook, Kaplan is seen as a strong conservative voice.[4] He has helped place conservatives in key positions in the company, and advocated for the interests of the right-wing websites Breitbart News and The Daily Caller within the company.[5][3][6] He has successfully advocated for changes in Facebook's algorithm to promote the interests of right-wing publications,[3] and successfully prevented Facebook from closing down Facebook groups that were alleged to have circulated fake news, arguing that doing so would disproportionately target conservatives.
...
After law school, he clerked for Supreme Court Justice Antonin Scalia and Fourth Circuit Court of Appeals Judge J. Michael Luttig.[2] He was an active conservative Democrat during the early 1990s.[8] He registered as a Republican in the late 1990s.[9]
Kaplan worked as a policy advisor on George W. Bush's 2000 presidential campaign, during which he was a participant in the Brooks Brothers riot on November 22, 2000.[10]
From 2001 to 2003 he was special assistant to the president for policy within the White House Chief of Staff's office. He then served as deputy director of the Office of Management and Budget, serving under Joshua Bolten. While at the OMB, in 2006, Kaplan said the administration would cut the deficit by half by 2009.[11]
In April 2006 he returned to the White House as the White House Deputy Chief of Staff for policy, taking over policy planning duties from Karl Rove as part of a staff shake-up by White House Chief of Staff Josh Bolten. Blake Gottesman was the other Deputy Chief of Staff and focused on operations.[12] He was responsible for the development and implementation of the Administration's policy agenda.
While in the Bush administration, Kaplan was seen as very close to Bolten.[13]
https://en.wikipedia.org/wiki/Joel_Kaplan
From the article:
In 2014, Kaplan was promoted from US policy head to global vice president for policy, and he began playing a more heavy-handed role in content moderation and decisions about how to rank posts in users' news feeds. After Republicans started voicing claims of anti-conservative bias in 2016, his team began manually reviewing the impact of misinformation-detection models on users to ensure, among other things, that they didn't disproportionately penalize conservatives.
...
The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, "fairness" does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.
But members of Kaplan's team followed exactly the opposite approach: they took "fairness" to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. "There's no point, then," the researcher says. A model modified in that way would have literally no impact on the actual problem of misinformation.
This happened countless other times, and not just for content moderation. In 2020, the Washington Post reported that Kaplan's team had undermined efforts to mitigate election interference and polarization within Facebook, saying they could contribute to anti-conservative bias. In 2018, it used the same argument to shelve a project to edit Facebook's recommendation models even though researchers believed it would reduce divisiveness on the platform, according to the Wall Street Journal. His claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook's data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.
https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/
How a Bush advisor made sure Facebook had no more RW content removed than LW, whether true or not (Original Post)
muriel_volestrangler, Mar 2021 (OP)
pfitz59 (10,302 posts)
1. I've been in FB jail for a month
this may explain why...
Kid Berwyn (14,795 posts)
2. One of THESE assholes who helped steal Florida in 2000?
They were there at the, eh, "suggestion" of Roger Stone.