
Borderline

Borderline is a mobile research and creation tool that uses sound to create new understandings of place.

Using algorithms trained to identify ~100 common sounds, the project enables users to identify and annotate invisible boundaries that affect social and economic mobility, and to export their data in an open, accessible format. The tagged recordings play automatically when you visit their original locations, creating interactive acoustic footprints that change as you move through them.
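To make the mechanics concrete, here is a minimal sketch of the kind of logic this description implies: geotagged recordings, a proximity check that selects which recordings should play at the listener's current position, and an export to GeoJSON as one example of an open format. The field names, the 30-metre radius, and the helper functions are all assumptions made for this sketch, not Borderline's actual implementation.

```python
import json
import math
from dataclasses import dataclass

@dataclass
class TaggedRecording:
    path: str    # audio file on the device
    label: str   # e.g. "traffic" or "birdsong", one of the ~100 classes
    lat: float   # WGS84 latitude of the original recording
    lon: float   # WGS84 longitude of the original recording

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def recordings_in_earshot(recordings, lat, lon, radius_m=30.0):
    """Select recordings tagged within radius_m of the listener's position."""
    return [rec for rec in recordings
            if haversine_m(rec.lat, rec.lon, lat, lon) <= radius_m]

def to_geojson(recordings):
    """Export tagged recordings as a GeoJSON FeatureCollection string."""
    features = [{
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [rec.lon, rec.lat]},
        "properties": {"path": rec.path, "label": rec.label},
    } for rec in recordings]
    return json.dumps({"type": "FeatureCollection", "features": features}, indent=2)
```

A mobile client built this way would poll the device's location, call something like recordings_in_earshot, and start or stop playback as the result set changes; GeoJSON keeps the exported annotations readable by common mapping tools.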

Soundwalk with a beta version of Borderline, Struer, Denmark, presented as part of RE:SOUND 2019 (Aalborg University & RELATE – Research Laboratory for Art and Technology).

The project evolved from a series of soundwalks through gentrifying areas of cities in 2014 and 2015. Soundwalks following trajectories of urban data have taken place in Toronto (2018), Vancouver, Struer, and Providence (2019), Pamplona (2021) and Port Hope (2022).

This research is organized around two main questions:

  1. What are the relationships between sounds in the urban environment and socioeconomic indicators such as demographics, housing patterns, gentrification and displacement?
  2. How might mobile technologies be used to create more inclusive methods of capturing, annotating and mapping sound in our cities?

Using research methods that draw from sound studies, critical mapmaking, participatory action research and artistic intervention, the project is designed to create new forms of citizen engagement by listening through, and centering, the ears on the ground. Our work is guided by core principles: taking an intersectional approach to understanding data; practicing transparency in our data collection, mapping and metadata; prioritizing accessibility in our design decisions and in the language we use to describe this work; and creating a supportive working and learning environment for students.

Project Team

Credits

Borderline was made possible through an Insight Development Grant from the Social Sciences and Humanities Research Council of Canada. Further research is funded by a Government of Canada Early Researcher Award. 

Training data for neural networks courtesy of the Freesound Annotator.