[open-development] Ethics and risk in open development - summary of some OK Con discussions

LINDA RAFTREE lindaraftree at gmail.com
Tue Nov 5 06:55:03 UTC 2013


Hi All, 
Here is (finally!) the post on ethics and risk in open development, based on
our discussions in the open development group at OK Con and bringing in a
link to last week's OGP summit as well. I hope I've captured everything, but
if not please feel free to comment! It's posted here on the OKF blog
(thanks, Katelyn!) 
http://blog.okfn.org/2013/11/05/ethics-and-risk-in-open-development/

Best,
Linda

Ethics and risk in open development

A core theme that the Open Development track covered at September's Open
Knowledge Conference was Ethics and Risk in Open Development
<http://okcon.org/open-development-and-sustainability/session-b/>. There
were more questions than answers in the discussions, summarized below, and
the Open Development working group plans to further examine these issues
over the coming year.

Informed consent and opting in or out

Ethics around 'opt in' and 'opt out' when working with people in communities
with fewer resources, lower connectivity, and/or less of an understanding
about privacy and data are tricky. Yet project implementers have a
responsibility to work to the best of their ability to ensure that
participants understand what will happen with their data in general, and
what might happen if it is shared openly.

There are some concerns around how these decisions are currently being made
and by whom. Can an NGO make the decision to share or open data from/about
program participants? Is it OK for an NGO to share 'beneficiary' data with
the private sector in return for funding to help make a program
'sustainable'? What liabilities might donors or program implementers face in
the future as these issues develop?

Issues related to private vs. public good need further discussion, and there
is no one right answer because concepts and definitions of 'private' and
'public' data change according to context and geography.

Informed participation, informed risk-taking

The 'do no harm' principle is applicable in emergency and conflict
situations, but is it realistic to apply it to activism? There is concern
that organizations implementing programs that rely on newer ICTs and open
data are not ensuring that activists have enough information to make an
informed choice about their involvement. At the same time, assuming that
activists don't know enough to decide for themselves can come across as
paternalistic.

As one participant at OK Con commented, "human rights and accountability
work are about changing power relations. Those threatened by power shifts
are likely to respond with violence and intimidation. If you are trying to
avoid all harm, you will probably not have any impact." There is also the
concept of transformative change: "things get worse before they get better.
How do you include that in your prediction of what risks may be involved?
There also may be a perception gap in terms of what different people
consider harm to be. Whose opinion counts and are we listening? Are the
right people involved in the conversations about this?"

A key point is that whoever assumes the risk needs to be involved in
assessing that potential risk and deciding what the actions should be, but
people also need to be fully informed. With new tools coming into play all
the time, can people be truly 'informed', and are outsiders who come in with
new technologies doing a good enough job of facilitating discussions about
possible implications and risk with those who will face the consequences?
Are community members and activists themselves included in risk analysis,
assumption testing, threat modeling and risk mitigation work? Is there a way
to predict the likelihood of harm? For example, can we determine whether
releasing 'x' data will likely lead to 'y' harm happening? How can
participants, practitioners and program designers get better at identifying
and mitigating risks?

When things get scary…

Even when risk analysis is conducted, it is impossible to predict or foresee
every possible way that a program can go wrong during implementation. Then
the question becomes what to do when you are in the middle of something that
is putting people at risk or leading to extremely negative unintended
consequences. Who can you call for help? What do you do when there is no
mitigation possible and you need to pull the plug on an effort? Who decides
that you've reached that point? This is not an issue that exclusively
affects programs that use open data, but open data may create new risks with
which practitioners, participants and activists have less experience, thus
the need to examine it more closely.

Participants felt that there is not enough honest discussion on this aspect.
There is a pop culture of 'admitting failure', but admitting harm is
different because there is a higher sense of liability and distress. "When
I'm really scared shitless about what is happening in a project, what do I
do?" asked one participant at the OK Con discussion sessions. "When I
realize that opening data up has generated a huge potential risk to people
who are already vulnerable, where do I go for help?" We tend to share our
"cute" failures, not our really dismal ones.

Academia has done some work around research ethics, informed consent, human
subjects research and the use of Institutional Review Boards (IRBs). What
aspects of this can or should be applied to mobile data gathering,
crowdsourcing, open data work and the like? What about when citizens are
their own source of information and they voluntarily share data without a
clear understanding of what happens to the data, or what the possible
implications are?

Do we need to think about updating and modernizing the concept of IRBs? A
major issue is that many people who are conducting these kinds of data
collection and sharing activities using new ICTs are unaware of research
ethics and IRBs and don't consider what they are doing to be 'research'. How
can we broaden this discussion and engage those who may not be aware of the
need to integrate informed consent, risk analysis and privacy awareness into
their approaches?

The elephant in the room

Despite our good intentions to do better planning and risk management, one
big problem is donors, according to some of the OK Con participants. Do
donors require enough risk assessment and mitigation planning in their
program proposal designs? Do they allow organizations enough time to develop
a well-thought-out and participatory Theory of Change along with a rigorous
risk assessment together with program participants? Are funding recipients
required to report back on risks and how they played out? As one person put
it, "talk about failure is currently more like a 'cult of failure' and there
is no real learning from it. Systematically we have to report up the chain
on money and results and all the good things happening, and no one up at the
top really wants to know about the bad things. The most interesting learning
doesn't get back to the donors or permeate across practitioners. We never
talk about all the work-arounds and backdoor negotiations that make
development work happen. This is a serious systemic issue."

Greater transparency can actually be a deterrent to talking about some of
these complexities, because "the last thing donors want is more complexity
as it raises difficult questions."

Reporting upwards to government representatives in Parliament or Congress
leads to continued aversion to any failures or 'bad news'. Though funding
recipients are urged to be innovative, they still need to hit numeric
targets so that the international aid budget can be defended in government
spaces. Thus, the message is mixed: "Make sure you are learning and
recognizing failure, but please don't put anything too serious in the final
report." There is awareness that rigid program planning doesn't work and
that we need to be adaptive, yet we are asked to "put it all into a log
frame and make sure the government aid person can defend it to their
superiors."

Where to from here?

It was suggested that monitoring and evaluation (M&E) could be used as a
tool for examining some of these issues, but M&E needs to be seen as a
learning component, not only an accountability one. M&E needs to feed into
the choices people are making along the way, and linking it in well during
program design may be one way to include a more adaptive and iterative
approach. M&E should force practitioners to ask themselves the right
questions as they design programs and as they assess them throughout
implementation. Theory of Change might help, and an ethics-based approach
could be introduced as well to raise these questions about risk and privacy
and ensure that they are addressed from the start of an initiative.

Practitioners have also expressed the need for additional resources to help
them predict and manage possible risk: case studies, a safe space for
sharing concerns during implementation, people who can help when things go
pear-shaped, a menu of methodologies, a set of principles or questions to
ask during program design, or even an ICT4D Implementation Hotline or a
forum for questions and discussion.

These ethical issues around privacy and risk are not exclusive to Open
Development. Similar issues were raised last week at the Open Government
Partnership Summit
<http://www.opengovpartnership.org/get-involved/london-summit-2013>
sessions on whistleblowing, privacy, and safeguarding civic space,
especially in light of the Snowden case
<http://tisne.org/2013/10/08/privacy-the-open-government-party-crasher/>.
They were also raised at last year's Technology Salon on Participatory
Mapping
<http://lindaraftree.com/2013/02/11/the-ethics-of-participatory-digital-mapping-with-communities/>.

A number of groups are looking more deeply into this area, including the
Capture the Ocean Project <http://www.capturetheocean.com/>, The Engine
Room <https://socialtechcensus.org/>, IDRC's research network
<http://www.opendataresearch.org/content/2013/501/open-data-privacy-discussion-notes>,
The Open Technology Institute
<http://newamerica.net/sites/newamerica.net/files/policydocs/DialingDownRisksFinalPDF.pdf>,
Privacy International
<http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2326229>, GSMA
<http://www.gsma.com/publicpolicy/privacy-design-guidelines-for-mobile-application-development>,
those working on "Big Data
<http://lsr.nellco.org/cgi/viewcontent.cgi?article=1434&context=nyu_plltwp>,"
those in the Internet of Things
<http://gigaom.com/2013/10/03/designing-security-into-the-internet-of-things/>
space, and others.

I'm looking forward to further discussion with the Open Development working
group on all of this in the coming months, and will also be putting a little
time into mapping out existing initiatives and identifying gaps when it
comes to these cross-cutting ethics, power, privacy and risk issues in open
development and other ICT-enabled data-heavy initiatives.

Please do share information, projects, research, opinion pieces and more if
you have them!


Linda Raftree
mobile: +1-401-440-5432
blog: lindaraftree.com <http://lindaraftree.com/>
twitter: @meowtree <http://twitter.com/meowtree>
skype: lindaraftree
about: http://about.me/lindaraftree

