In Defense of Anagorism

political economy in the non-market, non-state sector

Tag: information asymmetry

  • Is there #OpenData outside #OpenGov?

    Is there a public domain separate from the public sector? Is there any interest in applying pressure to the private sector for increased transparency and a less proprietary approach to data?

    I ask this because when I check in on #OpenData as a Twitter hashtag, approximately 100% of the tweets describe some kind of activity that is being undertaken in the “open government” arena, or something about some local or national government somewhere releasing some dataset into the public record.

    A tweet by Josie and Lori (@peced): Is paucity of #OpenData projects that're not also #OpenGov abt gov as a path of less resistance, or deference to legitimacy of trade secrets?

    I ask this because I want to believe that the open data movement is currently concentrating on public sector data simply because that is the path of least resistance or, in practical terms, that is where the available data are, so those who have queries and algorithms to try out simply go where the data are. Another, more cynical-seeming possibility that occurs to me is that the open data movement is populated largely by people who see a place for proprietary data in the world, or who see it as natural, legitimate and perhaps inevitable that businesses treat actionable data as proprietary. “Information wants to be valuable” and all that. Another possibility that has occurred to me is that open data is the new open source, and some percentage of the open data activity is essentially portfolio work undertaken in hopes of making a career of data science, with the candid understanding that information wants to be valuable, and frank acceptance that professional careers in data will inevitably mean non-disclosure agreements, non-compete clauses and other data siloing strategies.

    If there is some kind of career path from open data activism to information asymmetry-fueled careerism or entrepreneurship, then it is right and proper to view those who successfully make that transition as rent seekers, very much in the tradition of companies that stake intellectual property claims on things that are fairly direct fruits of public investment in basic research. If this is what’s happening, don’t be surprised if at some point these “activists” act to cut out the lower rungs on the ladder of their own ascent, perhaps by advocating privatization of those government agencies that supplied the open data they cut their teeth on, or alternatively, advocating “running government like a business” which of course includes recognizing data (which wants to be valuable) as a valuable asset and hoarding it (or at least pricing access to it) as a private entity would.

    The main question on my mind is: Are there “rogue elements” within the open data movement that take a decidedly adversarial view toward commercial entities when it comes to matters of applied information asymmetry? #OpenData as a “brand” has become so absolutely synonymous with #OpenGov that perhaps I should be looking outside the self-identified open data movement for the “proprietary is a dirty word” kind of irreverence and spunk that I’m after.

  • Public data infrastructure?

    This is prompted by Who Owns Big Data? by Michael Nielsen at the “OpenMind” website. I was clued in to that post by a re-post of part of that article by Michel Bauwens at the P2P Foundation Blog. The OpenMind website doesn’t accept comments, and the P2P blog either doesn’t accept comments this long, or doesn’t accept comments edited offline and pasted to the comment form (it actually came back with an error message “you’re posting too fast, slow down”). So again, the blogosphere is the voice of the little people; the antidote to the we-talk-you-listen model of institutions and of those working within the system.

    The “OpenMind” website where this content came from seems to be staffed by NGO types (at least one, it seems, is affiliated with UNESCO), so of course they’re looking for new roles for the large-scale nonprofit sector. They also, to a person, have academic pedigrees. Their world is one that is utterly inaccessible to my working-class born-and-raised self, thanks to the usual opportunity hoarding and the like. I loved open source back in the glorious nineties precisely because it was the age of hobby projects. Plus, the OpenMind website has a look and feel that seem to me, how shall I put it–“polished.” “Professional” production values. You know what I mean, I’m sure. The lack of a commenting facility also seems to say something, um, institutional.

    Academia, in spite of its elitist tendencies, was a big part of the open source scene of the glorious nineties; maybe most of it. Many open source developers had a day job that was staff-not-faculty at some university. If this job paid enough to live on, and wasn’t as draconian about non-disclosure, non-compete and EDS-style training cost clawbacks as was private industry, then right there you have all the ingredients one needs in order to have the luxury of contributing to open source. There also seemed to be more activity of that sort in Europe than in North America, but I’m not sure. Before 1993 or so, .com domains were somewhat exotic against a backdrop of .edu domains and their overseas equivalents.

    The surrender of open source to either monetized Linux distros or free-will offerings of corporations is a mixed curse (the mirror image of a mixed blessing), as (1) at least there is still code in the public domain and (2) that model does have the virtue of scaling to larger and more sophisticated applications. I still think the reason it happened in the first place is austerity: an ideological commitment to a leaner public sector, especially when it comes to paid jobs. Those university computer lab staffers and (“supported”) ten-year-track graduate students of the 1990s had creative luxuries that are almost unimaginable today.

    Wikipedia happened because Jimbo Wales was independently wealthy. OpenStreetMap is “in partnership with MapQuest” and so, like the monetized Linux distros, has become at least semi-commercial, which is probably better than simply folding or something, but may well be a step away from rather than toward the public data infrastructure we all dream of.

    Some 15 years ago I first proposed Pubwan. At the time, I was thinking “public wide area network,” but my thinking on this evolved into more of a public distributed database. Now that “Big Data” is the rage, I’m starting to think maybe I was on the right path to begin with, focusing on hardware. It becomes clearer every year that overwhelming informational advantage can be a decisive advantage, and organizations (let alone loose federations) that are not in a position to run server farms and/or large-scale network infrastructure probably have no potential to play an active role in humanity’s informational future. This is sad, as I’m only a little more trusting of Big Philanthropy than I am of Big Business.

    Back in the day, there was something called Fidonet that was pretty purely decentralized, hobbyist, non-monetized, whatever else you would like. The catch was that if you whittled it all the way down to the hardware level, the platform it ran on was the telephone system. As far as I know, there is no precedent for assets of that type to be managed by non-profit organizations. It’s either public monopoly or private monopoly.

    Consider the following two statements from Nielsen’s article:

    In general, I am all for for-profit companies bringing technologies to market. However, in the case of a public data infrastructure, there are special circumstances which make not-for-profits preferable.

    But it’s difficult to believe that having the government provide a public data infrastructure more broadly would be a good idea.

    It’s almost as if the First Commandment underlying the public communications of organizations such as OpenMind is “don’t buck the neoliberal consensus.” This is the kind of kabuki I have come to expect from the kind of people who are established in careers…

    Then there is the question of what we would like a public data infrastructure to accomplish. I would propose that its main mission be to act as a countervailing force to commercial big data practices. A strategy to replace information asymmetry with information parity. Work against the fact that information doesn’t want to be free and make no bones about it. Fight the tendency of commercial websites to dispense single data points by offering members of the public the ability to do ad-hoc queries against large datasets. Also, put personal devices in people’s hands that feed behavioral and other data first to their users, and afterward, assuming the permission of their users, some subset of that data stream might go directly into the public domain. With any luck, it will find its way into social science research; a more worthy cause, in my opinion, than market research.
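
    To make that “users first, public domain afterward” flow concrete, here is a minimal sketch in Python. It assumes a hypothetical device that logs step counts and GPS fixes; the table layout, the field names and the PUBLIC_FIELDS allow-list are all invented for illustration and are not part of any existing pubwan design.

      import sqlite3
      from typing import Iterable

      # User-chosen allow-list: only these fields ever leave the device.
      PUBLIC_FIELDS = {"timestamp", "step_count"}

      def open_local_store(path: str = "my_device.db") -> sqlite3.Connection:
          """The user's own queryable store; every reading lands here first."""
          db = sqlite3.connect(path)
          db.execute("CREATE TABLE IF NOT EXISTS readings "
                     "(timestamp TEXT, step_count INTEGER, latitude REAL, longitude REAL)")
          return db

      def release_to_public(reading: dict) -> dict:
          """Return only the subset the user has agreed to place in the public domain."""
          return {k: v for k, v in reading.items() if k in PUBLIC_FIELDS}

      def ingest(db: sqlite3.Connection, readings: Iterable[dict]) -> list[dict]:
          """Write the user's full copy first; derive the releasable subset afterward."""
          released = []
          for r in readings:
              db.execute("INSERT INTO readings VALUES "
                         "(:timestamp, :step_count, :latitude, :longitude)", r)
              released.append(release_to_public(r))
          db.commit()
          return released

    Ad-hoc queries against the local readings table are then just SQL run by the device’s owner rather than by a remote vendor.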

  • That which is for sale is that which is not free

    In Why buy the cow if the milk is free?, UnboundID asks and answers:

    What would make data sharing acceptable to consumers?

    1. Being asked what you’d be willing to share
    2. Being given meaningful value for the use of the data
    3. A guarantee that the data will be kept secure
    4. The ability to update data, or revoke access to it
    5. Knowledge of who the data will be shared with

    I speak for only one consumer. What would make data sharing acceptable to this consumer?

    1. Having a client-side record of every outbound data transfer in queryable form
    2. Having packet-level access to network traffic in/out of my devices
    3. The ability to mark individual table/view columns (sketched in code after this list) as
      1. private, meaning not in circulation,
      2. shareable with the general public as nonproprietary data, or
      3. shareable as proprietary (monetizable) data with a list of named data partners.
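
    Here is a rough sketch, in Python, of what item 3’s per-column markings might look like. The class, enum and field names are hypothetical, chosen only to make the three-way distinction concrete; they are not drawn from UnboundID or any existing product.

      from dataclasses import dataclass, field
      from enum import Enum

      class Sharing(Enum):
          PRIVATE = "private"          # not in circulation at all
          PUBLIC_DOMAIN = "public"     # shareable with anyone, nonproprietary
          PROPRIETARY = "proprietary"  # monetizable, named data partners only

      @dataclass
      class ColumnPolicy:
          table: str
          column: str
          sharing: Sharing = Sharing.PRIVATE
          partners: list[str] = field(default_factory=list)  # consulted only when PROPRIETARY

          def may_release_to(self, recipient: str) -> bool:
              """Decide, column by column, whether a given recipient may receive the data."""
              if self.sharing is Sharing.PUBLIC_DOMAIN:
                  return True
              if self.sharing is Sharing.PROPRIETARY:
                  return recipient in self.partners
              return False  # PRIVATE

      # Example: location stays private, step counts enter the public domain,
      # purchase history is monetized with exactly one named partner.
      policies = [
          ColumnPolicy("gps_log", "latitude"),
          ColumnPolicy("activity", "step_count", Sharing.PUBLIC_DOMAIN),
          ColumnPolicy("purchases", "item", Sharing.PROPRIETARY, ["example_partner_inc"]),
      ]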

    The milk metaphor is apropos. The key to monetizing your projects is being willing to milk them.

  • When others know you better than you know yourself, I call that power.

    CBC asks:

    How much data privacy can you expect to have?

    I’d like to see a shift to a debate which treats as the relevant question: How much information asymmetry can you expect not to have?

    Or alternatively: How much machine-readable/queryable/”mineable” data can a consumer or end-user expect to be on the receiving end of? In the case of a $ell phone, this might mean access to raw dumps of one’s own GPS logs, call logs, raw network traffic feeds up & down, etc. Analogously, raw feeds for “smart” meters from Big Utility, etc.
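
    As a small illustration of what “queryable” could mean here, the following Python sketch runs an ad-hoc query against a raw CSV dump of one’s own GPS log. The file name and the column layout (an ISO 8601 timestamp plus latitude and longitude) are assumptions made for the example, not any carrier’s actual export format.

      import csv
      from collections import Counter

      def fixes_per_hour(gps_dump_path: str) -> list[tuple[str, int]]:
          """Tally GPS fixes by hour of day from a raw dump of one's own location log."""
          hours = Counter()
          with open(gps_dump_path, newline="") as f:
              for row in csv.DictReader(f):            # expects columns: timestamp,latitude,longitude
                  hours[row["timestamp"][11:13]] += 1  # hour field of an ISO 8601 timestamp
          return hours.most_common()

      if __name__ == "__main__":
          # e.g. "14:00  312 fixes", the kind of summary the other side of the asymmetry already has
          for hour, count in fixes_per_hour("my_gps_dump.csv"):
              print(f"{hour}:00  {count} fixes")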