by Margot Dick
No one expects to become a victim of flooding, but it pays to be prepared for the worst. Greg Ewing, a PhD student at IIHR, is working with IIHR researcher Ibrahim Demir to build new ways for communities to prepare for and respond to the next disaster.
With that in mind, Ewing designed a framework called the Water Ethics Web Engine, or (WE)². The goal of the framework is to power decision-making tools that can be used by community members, leaders, and emergency management coordinators to make decisions that best reflect the desires of the community.
First, individuals are invited to provide their anonymous opinions on what is better for a community through an online questionnaire. Questions range from economic to ethical, asking where sandbags should be deployed, which areas of a town should be allowed to flood, and which buildings should be prioritized. All questions have two answers, A or B, and users select which answer they think is better for the community. The answers are then aggregated for analysis.
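The aggregation step described above can be sketched in a few lines of code. The response format and function name below are illustrative assumptions, not the actual (WE)² implementation: each respondent is a dictionary mapping a question to their chosen option, and the tallies are converted into per-question shares.

```python
from collections import Counter

def aggregate_responses(responses):
    """Tally anonymous A/B answers per question and report the share
    of respondents choosing each option (hypothetical data format)."""
    tallies = {}
    for answer_sheet in responses:  # one dict per respondent
        for question, choice in answer_sheet.items():
            tallies.setdefault(question, Counter())[choice] += 1
    # Convert raw counts into proportions for analysis
    summary = {}
    for question, counts in tallies.items():
        total = sum(counts.values())
        summary[question] = {opt: n / total for opt, n in counts.items()}
    return summary

# Example: three anonymous respondents, two questions
responses = [
    {"sandbag_site": "A", "flood_zone": "B"},
    {"sandbag_site": "A", "flood_zone": "A"},
    {"sandbag_site": "B", "flood_zone": "B"},
]
print(aggregate_responses(responses))
```

A summary like this makes disagreement visible at a glance: a question split close to 50/50 signals an area a community leader may want to discuss before a flood forces the decision.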
Community leaders can use the information and the decision models gathered through the framework to facilitate conversations about ethics and values important to their residents. The model highlights areas of disagreement or importance before a flooding event, meaning communities are better prepared when disaster hits.
Next, the compiled data can be set aside for a time when a decision is necessary. The information can then, theoretically, be used to make decisions that support stakeholders’ preferences and desires.
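One simple way the compiled data could, in principle, support a later decision is a majority-with-threshold rule: recommend the option most respondents preferred, but only when its support is clear. This is a sketch under assumed data structures, not the framework's actual decision logic.

```python
def recommend(summary, question, threshold=0.5):
    """Return the option with majority support for a question, or None
    when no option clears the threshold (illustrative rule only)."""
    shares = summary.get(question, {})
    if not shares:
        return None
    option, share = max(shares.items(), key=lambda kv: kv[1])
    return option if share > threshold else None

# Hypothetical aggregated shares for one question
summary = {"sandbag_site": {"A": 0.7, "B": 0.3}}
print(recommend(summary, "sandbag_site"))  # prints A
```

Returning None on a near-even split, rather than forcing a choice, keeps genuinely contested questions in the hands of the community rather than the tool.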
Though the current focus of (WE)² is water systems, there is room to evolve. Ewing says the framework has many other possible applications beyond disaster response, including as an educational tool to explain the processes behind artificial intelligence. AI systems are built using data and information from past outcomes. When the input data is inherently biased, the program will continue to promote the same kinds of decisions. For example, if a company has a history of hiring white, male candidates, a program trained on that history will predict that white men are more likely to succeed at the company, and other applicants will not be considered. Frameworks such as (WE)² add a human element to automated decision-making tools in an effort to identify the biases of AI systems and correct for them moving forward.
The (WE)² framework offers a process to investigate decision support systems to avoid unintentional bias.
Ewing is excited to see where the framework goes and encourages any parties interested in its application to reach out to him with questions or comments.
The framework is available on GitHub, a site that hosts and distributes code. Anyone interested in the (WE)² framework can download the code and see how it works for themselves at the link below:
Ewing hopes the framework can be used to make decisions and streamline disaster response in Iowa communities.
He says, “The long-term goal is to be able to incorporate it with your sensor feeds, your data feeds, your forecasting, to provide a confident recommendation to those leading disasters.”
He hopes the decision-making model can give disaster response coordinators confidence that the decisions they make reflect the desires and values of their community. Disaster response is not a one-size-fits-all solution, and individual communities can use the framework to determine how they want to respond.