This post is based on a manuscript in progress with colleagues from the RAPP Lab network. It builds directly on the August 2023 latency measurements. That post covered what the numbers look like. This one covers why getting to those numbers was the easy part.
The Setup
After spending two and a half years measuring latency across six European research-network links, I can tell you that the audio numbers are achievable: 7.5 to 22.5 ms one-way between Prague and Tallinn, LoLa and MVTP both working, musicians playing together across national borders in real time. Technically, that story has a satisfying ending.
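For intuition about how close those figures sit to the physical floor, here is a back-of-the-envelope sketch. The city coordinates and the fibre refractive index are my own rough assumptions, not values from the measurement work, and real fibre paths run well beyond the great circle:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    # Haversine distance between two points on a sphere (Earth radius ~6371 km).
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Approximate city coordinates (assumed for illustration).
prague = (50.08, 14.43)
tallinn = (59.44, 24.75)

distance_km = great_circle_km(*prague, *tallinn)

# Light in optical fibre propagates at roughly c / 1.47, i.e. ~204 km per ms.
fibre_speed_km_per_ms = 204.0
floor_ms = distance_km / fibre_speed_km_per_ms

print(f"great-circle distance: {distance_km:.0f} km")
print(f"propagation floor (one-way): {floor_ms:.1f} ms")
```

The great circle comes out around 1,200 km, so the hard propagation floor is on the order of 6 ms one-way. That is why a measured 7.5 ms is close to the best physics allows, and why the headroom for switching, buffering and codec delay is so thin.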
What the measurement paper does not capture is everything that had to be true institutionally before we could run a single test. The firewall negotiations. The repeated calibration sessions. The network configuration that nobody outside our small group knew how to reproduce when someone left. The grant that funded the equipment but not the person who kept it running. The performance session that nearly collapsed because a campus IT update had silently changed a routing rule three days prior.
The technical infrastructure worked. The institutional infrastructure around it was precarious in ways that only became visible when something broke.
This is what the follow-up paper tries to name.
What Is a Digital Music Lab, Actually?
The term gets applied to everything from a laptop cart in a classroom to IRCAM Paris. We use it to mean something specific: a Digital Music Lab (DML) is a hybrid environment where space, equipment, software, personnel and organisational routines are configured together to support iterative artistic experimentation, research-led learning and outward-facing engagement.
The key phrase is configured together. A room full of excellent hardware is no more a DML than a building full of books is a library. What makes either work is an invisible layer of social organisation: access policies, shared norms, maintained documentation, people who know what to do when something breaks.
We borrow a concept from infrastructure studies to describe this: performative infrastructure. The concept draws on Star and Ruhleder (1996), and it captures something precise — that infrastructure does not merely enable activity, it also shapes what kinds of activity are possible in the first place. The decision to use LoLa rather than Zoom is not just a technical choice; it is an institutional statement about what kind of musical interaction this space is designed to support, and about who is expected to use it.
This framing matters because it shifts the design question. You are not asking “what equipment should we buy?” You are asking “what kind of practice do we want to make possible, and what organisational conditions make that practice sustainable?”
Four Things That Actually Determine Whether a DML Survives
1. Flexible by design, not by accident
Resilient labs resist the temptation to optimise for one use case. The systems that have lasted — Stanford CCRMA is the obvious reference point, nearly five decades and counting — tend to separate a stable core (networking, routing, authentication, documentation) from a more rapidly changing layer of creative tools and workflows. The core does not change when you switch DAWs or update your streaming platform. The tools on top of it can.
This sounds obvious. In practice it means being deliberate about which dependencies you are willing to accept. A lab built on a single vendor ecosystem can offer tight integration, but it creates a single point of failure and a maintenance contract you will be negotiating forever. A lab built on open protocols and well-documented configurations is more work to set up and less work to sustain.
The other thing flexibility buys is pedagogical range. The same environment can host an introductory workshop, an advanced performance-research project and a public-facing concert without requiring incompatible reconfiguration for each. This is not a luxury. It is what makes a DML worth the overhead compared to just booking a studio.
2. Governance that survives personnel turnover
The single most dangerous sentence in any DML is: “We can ask [person] — they know how it works.”
Every lab has that person. The one who configured the routing. The one who knows which cable does what. The one who has the institutional memory of every workaround and edge case. When that person moves on, the lab frequently becomes unreliable within six months and functionally inaccessible within a year — even if all the equipment is still there. We call these zombie infrastructures: technically present, functionally dead.
The corrective is not to document everything (though that helps). It is to design governance so that knowledge is distributed by default. Distributed stewardship roles — student assistants, rotating committees, peer mentors — mean that multiple people develop operational knowledge as a matter of routine, not as emergency knowledge transfer when someone announces they are leaving.
Technical staff need to be treated as co-creators in this model, not as service providers. When networked performance is framed as peripheral experimentation rather than core infrastructure, maintenance becomes precarious and invisible. When it is framed as core, collaboration between artistic and technical roles becomes institutional routine.
3. Maintenance as a budget line, not an afterthought
Here is the infrastructure paradox: systems are valued for enabling novelty, but they require boring, recurring investment to remain usable. Project funding solves the novelty problem. It almost never solves the maintenance problem.
The costs that make a lab reliable are not one-off:
- Staff continuity (or explicit knowledge transfer when staff change)
- Documentation that is actively maintained, not written once and forgotten
- Renewal cycles for hardware and software that actually match the pace of change in the underlying ecosystem
- User support during active sessions, not just during setup
At HfMT Köln, the operational work that dominated actual implementation time was none of the things that appear in grant applications: coordinating network pathways across campus boundaries, establishing and re-establishing calibration routines after infrastructure updates, producing documentation legible to people who were not present at the original setup, providing real-time support during rehearsals when something behaved unexpectedly.
None of this is glamorous. All of it is what determines whether musicians can actually use the system on a given Tuesday afternoon.
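One cheap guard against the silent-routing-change failure mode described earlier is a scheduled check that compares fresh latency samples against the last calibration, so drift is caught before the Tuesday rehearsal rather than during it. A minimal sketch, in which the function names, the baseline and the tolerance are illustrative assumptions rather than anything from the paper:

```python
from statistics import median

def latency_drift_ms(samples_ms, baseline_ms):
    """Median of recent one-way latency samples minus the calibrated baseline."""
    return median(samples_ms) - baseline_ms

def needs_recalibration(samples_ms, baseline_ms, tolerance_ms=2.0):
    # Flag when the link has drifted further from its calibrated value than
    # the tolerance the ensemble can absorb without re-rehearsing.
    return abs(latency_drift_ms(samples_ms, baseline_ms)) > tolerance_ms

# Example: a link calibrated at 9.0 ms; a silent routing change adds ~6 ms.
before = [8.9, 9.1, 9.0, 8.8, 9.2]
after = [15.1, 14.8, 15.3, 14.9, 15.0]

print(needs_recalibration(before, baseline_ms=9.0))  # within tolerance
print(needs_recalibration(after, baseline_ms=9.0))   # drift detected
```

The median rather than the mean keeps a single dropped or delayed probe from triggering a false alarm. The point is not this particular script but that the check runs on a schedule someone owns, and that its output goes somewhere a named person actually reads.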
4. Inclusion that is designed, not assumed
Technology-intensive environments reproduce exclusion reliably unless they are actively designed not to. The mechanisms are familiar: assumed prior experience, cultural signals about who belongs, scheduling that conflicts with caring responsibilities, documentation in a single language, interfaces that reward a particular kind of technical confidence.
For DMLs specifically, there is an additional layer. Networked music performance is genuinely different from co-located performance. The latency conditions require different listening and coordination strategies. For musicians trained in tight synchronous ensemble playing, the first experience of performing over a network is often disorienting — latency is not a technical glitch to be fixed; it is a compositional condition to be understood and worked with.
Framing this as a deficit is pedagogically counterproductive. Framing it as an occasion to develop new artistic vocabulary — to think deliberately about what interaction strategies work at 12 ms versus 22 ms, about how anticipatory listening changes the character of improvisation — turns an obstacle into content. Some of the most interesting musical thinking in our sessions came from participants who were trying to understand why something that was effortless in a rehearsal room required conscious attention over the network.
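A useful way to ground that 12 ms versus 22 ms conversation is a comparison musicians already embody: sound in air covers about 343 metres per second, so a one-way network latency is equivalent to a physical separation on stage. A small sketch of the conversion (the latency values are the ones quoted above; the equivalence itself is standard acoustics):

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # dry air at roughly 20 °C

def equivalent_stage_distance_m(one_way_latency_ms: float) -> float:
    # A one-way latency of t ms delays arrival by the same amount as
    # standing (t / 1000) * 343 metres apart in a room.
    return one_way_latency_ms / 1000.0 * SPEED_OF_SOUND_M_PER_S

for latency in (7.5, 12.0, 22.5):
    print(f"{latency:5.1f} ms ≈ {equivalent_stage_distance_m(latency):4.1f} m of stage separation")
```

On this reading, 12 ms is roughly two desks of an orchestra apart and 22.5 ms is the width of a large stage — distances at which ensembles already adapt their listening, which is exactly why latency is better treated as a condition than as a defect.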
The Tensions That Do Not Resolve
Being honest about what the paper does not solve:
Project funding versus operational costs. We do not have a structural solution to the mismatch between how labs are funded (innovation grants with defined end dates) and how they need to operate (indefinitely, with predictable maintenance budgets). Collaborative purchasing agreements and shared technical teams across institutions can distribute the burden, but they introduce coordination overhead. There is no clean answer here.
Experimentation versus accountability metrics. Universities and funders want quantifiable outputs. Artistic experimentation often produces its most valuable results as changed practices and new aesthetic understanding — things that do not appear in publication counts or utilisation statistics. The best available response is to be explicit about this mismatch when negotiating evaluation criteria, and to establish review processes that include artistic peers and community partners rather than only administrators. This is possible more often than people think, but it requires someone to argue for it proactively.
Openness versus depth. A lab built for maximum accessibility is not the same as a lab optimised for a specific research agenda, and trying to be both usually means doing neither well. The design question is not which is better but where the tradeoff lies for a particular institution’s mission. CCRMA and IRCAM have made different bets on this axis over decades and both have produced important work. The mistake is not having an opinion about where you sit on the spectrum.
Recommendations
These are for institutions and funders, assembled from what the paper describes as working across multiple DML contexts:
- Treat DMLs as long-term cultural infrastructure. Recurring budget lines for renewal, documentation and support — not just start-up funding.
- Separate your stable backbone from your creative tools. Networking, routing, authentication and documentation should not be rebuilt every time you change your video platform.
- Design governance that does not rely on one person. Distributed stewardship roles, clear succession documentation, operational knowledge treated as shared rather than individual.
- Make invisible labour visible. Technical stewardship, facilitation and community liaison need to appear in hiring, workload models and evaluation — not just in informal practice.
- Lower the floor for participation. Scaffolded onboarding, peer mentoring, programming that supports diverse musical practices and levels of technical experience.
- Sort out data governance before you start recording. Consent, archiving and reuse policies for audio/video, especially when community partners or students are involved.
- Plan for the lab’s eventual obsolescence. Versioning policies, migration plans, criteria for retiring tools. Zombie infrastructures are a governance failure, not a technical one.
- Evaluate on multiple axes. Technical reliability is one. Learning trajectories, student agency, community partnership durability and artistic outcomes are others. Reporting only the first one creates a misleading picture of whether the lab is actually working.
What This Does and Does Not Claim
The argument in the paper is conceptual and practice-informed rather than empirical in the standard sense. We synthesise literature and draw on the HfMT Köln implementation as a vignette — it is an illustration, not a representative sample. The framework we propose (four design principles, the performative infrastructure framing) is offered as an analytical vocabulary for planning and evaluation, not as a validated theory.
What it is useful for: making implicit infrastructure choices explicit, naming tensions before they become crises, and supporting more realistic conversations between artistic users, technical staff and institutional leadership about what it actually takes to make this work.
References
Borgdorff, H. (2012). The Conflict of the Faculties: Perspectives on Artistic Research and Academia. Leiden University Press.
Labbé, D., Zuberec, C., & Turner, S. (2022). Creative hubs in Hanoi, Vietnam: Transgressive spaces in a socialist state? Urban Studies. https://doi.org/10.1177/00420980221086371
McKay, G. (2017). Community music: History and current practice. International Journal of Community Music, 10(2), 129–137. https://doi.org/10.1386/ijcm.10.2.129_1
Morreale, F., Bowers, J., & McPherson, A. (2021). Collaborating in distributed musical partnerships. Computers in Human Behavior, 120, 106757. https://doi.org/10.1016/j.chb.2021.106757
Selwyn, N. (2021). Education and Technology: Key Issues and Debates (3rd ed.). Bloomsbury Academic.
Star, S. L., & Ruhleder, K. (1996). Steps toward an ecology of infrastructure: Design and access for large information spaces. Information Systems Research, 7(1), 111–134. https://doi.org/10.1287/isre.7.1.111
Wenger, E. (1998). Communities of Practice: Learning, Meaning, and Identity. Cambridge University Press. https://doi.org/10.1017/CBO9780511803932
Changelog
- 2026-01-20: Removed the Chafe (2018) “Stanford CCRMA: A 40-year retrospective” reference, which could not be confirmed in available databases (DOI does not resolve, not listed in Computer Music Journal 42(3)). The body text reference to CCRMA as an institutional example is retained; it does not depend on this citation.
- 2026-01-20: Changed “The term comes from Star and Ruhleder (1996)” to “The concept draws on Star and Ruhleder (1996).” Star and Ruhleder’s paper is the foundational text on relational infrastructure, but they did not coin the specific compound term “performative infrastructure.”