Very interesting, thanks. Essentially the same issues apply to any knowledge ecosystem. Academics rely on citation counts (a horribly crude measure with many flaws) and recommendations from academic sources. Important ideas may go unnoticed: Mendel's genetics is the classic example, but there must be many other ideas that were ignored yet might have benefited humanity if they had been picked up. Before the web, the news people got depended largely on the whims of newspaper editors, themselves probably driven largely by anticipated profits: a less formal but more transparent equivalent of Facebook's and Google's algorithms. How people's attention is controlled is a fascinating and important question with far wider relevance than the secret algorithms in tech products.
Focusing on one company or a small group of actors will not fix a systemic problem. The internet grew up without an economic protocol stack alongside its four-layer technical stack, i.e., settlements. Settlements act as incentives and disincentives across and between networks, applications, and users.
The definitions of "inaccurate information" and "misinformation" are themselves potential sources of disinformation, because the political biases of Silicon Valley culture at large militate in favor of certain speech being deemed true when it is in fact false or highly misleading.
The claim that the 2020 election was the cleanest and most scrutinized ever is no more true than the claim that it was marred by widespread fraud. The fact is that many jurisdictions produced outcomes tainted by scandal, most of which were never (and could never have been) resolved by thorough investigation, for the simple reason that the relevant data and logs were either never kept or were deleted long before they should have been. Many, if not most, of the cases thrown out of court were dismissed either for lack of standing or for a deficiency of evidence supporting or refuting the claims. The legitimacy of that election is, after the fact, completely unknowable with the information available today.

We don't know whether no ballots were cast by dead people or whether millions were, because virtually no jurisdiction in the country reconciles ballot rolls with death certificates. We don't know how many people voted in multiple jurisdictions, or whether those who voted at home also had ballots cast on their behalf by others in the communities where their vacation homes were located. We don't know how many ballots were cast by the same individual in a single jurisdiction using same-day registration and then destroyed before being brought to the board of elections to be reconciled into a comprehensive master list. We don't know how many ballots were put into drop boxes by ballot harvesters with no reasonable audit trail for those ballots (or for the registrations behind them). It is quite possible that the election tally was correct, but it is also quite possible that it was not. Public contentions in either direction could be correct or could be totally wrong, and it is wrong to call either of them false or to categorize either of them as disinformation.
Daphne Keller, for example, used misleading information when she described the person who shot two people in Kenosha as a violent white supremacist. The individual was actually a teenager who was acquitted of all charges based on the complete video record of the event (in contrast to the edited video that had generated massive outrage and precipitated the initial filing of charges). The young man will be forever scarred by the taking of two lives, but a jury concluded that he was in legitimate fear for his own life and found him not guilty of murder or manslaughter.
It is easy to draw tidy boundaries between arbitrary points on the continuum from fuzzy transparency to full accountability when we're talking about algorithms. While that can be clear and logical in a mathematically simple analysis, it gets much more complex when we deal with weightier questions of transparency and accountability. These questions matter far more to real people when they concern the behavior of federal government employees who interact with members of the general public. When a government employee becomes involved in matters with significant economic impact, should we demand that their personal investments be insulated from their ability to instruct an investment manager to buy or sell particular securities? If the employee circumvents those barriers, should that be grounds for dismissal? And what about cases where an agency employee is accused of personally targeting an individual or a group?
Transparency and accountability are much broader constructs (and bigger problems in our society) than a simple discussion of algorithms, and they merit much greater consideration in our public discourse at large. Perhaps the debate over how transparency and accountability should factor into social media algorithms can become the stimulus for broadening the conversation until we realize that, whether we're looking at social media, Silicon Valley, news media, corporate governance, or government at large, our whole society needs a renaissance of transparency and accountability to restore our culture to its most aspirational ideals.