Emerging across many disciplines are questions about algorithmic ethics – about the values embedded in artificial intelligence and big data analytics that increasingly replace human decisionmaking. Many are concerned that an algorithmic society is too opaque to be accountable for its behavior. An individual can be denied parole or denied credit, fired or not hired, for reasons she will never know and that cannot be articulated. In the public sector, the opacity of algorithmic decisionmaking is particularly problematic, both because governmental decisions may be especially weighty and because democratically-elected governments bear special duties of accountability. Investigative journalists have recently exposed the dangerous impenetrability of algorithmic processes used in the criminal justice field – dangerous because the predictions they make can be both erroneous and unfair, with none the wiser.

We set out to test the limits of transparency around governmental deployment of big data analytics, focusing our investigation on local and state government use of predictive algorithms. It is here, in local government, that algorithmically-determined decisions can be most directly impactful. And it is here that stretched agencies are most likely to hand over the analytics to private vendors, which may make design and policy choices out of the sight of the client agencies, the public, or both. To see just how impenetrable the resulting "black box" algorithms are, we filed 42 open records requests in 23 states seeking essential information about six predictive algorithm programs. We selected the most widely-used and well-reviewed programs, including those developed by for-profit companies, nonprofits, and academic/private-sector partnerships. The goal was to see if, using the open records process, we could discover what policy judgments these algorithms embody and could evaluate their utility and fairness. To do this work, we identified what meaningful "algorithmic transparency" entails. We found that in almost every case, it wasn't provided.

Over-broad assertions of trade secrecy were a problem. But contrary to conventional wisdom, they were not the biggest obstacle. It will not usually be necessary to release the code used to execute predictive models in order to dramatically increase transparency. We conclude that publicly-deployed algorithms will be sufficiently transparent only if (1) governments generate appropriate records about their objectives for algorithmic processes and subsequent implementation and validation, (2) government contractors reveal to the public agency sufficient information about how they developed the algorithm, and (3) public agencies and courts treat trade secrecy claims as the limited exception to public disclosure that the law requires. Although it would require a multi-stakeholder process to develop best practices for record generation and disclosure, we present what we believe are eight principal types of information that such records should ideally contain.