<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Digital-Music-Labs on Sebastian Spicker</title>
    <link>https://sebastianspicker.github.io/tags/digital-music-labs/</link>
    <description>Recent content in Digital-Music-Labs on Sebastian Spicker</description>
    <image>
      <title>Sebastian Spicker</title>
      <url>https://sebastianspicker.github.io/og-image.png</url>
      <link>https://sebastianspicker.github.io/og-image.png</link>
    </image>
    <generator>Hugo -- 0.160.0</generator>
    <language>en</language>
    <lastBuildDate>Fri, 14 Jun 2024 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://sebastianspicker.github.io/tags/digital-music-labs/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>The Boring Parts of Networked Music Performance</title>
      <link>https://sebastianspicker.github.io/posts/digital-music-labs-infrastructure/</link>
      <pubDate>Fri, 14 Jun 2024 00:00:00 +0000</pubDate>
      <guid>https://sebastianspicker.github.io/posts/digital-music-labs-infrastructure/</guid>
      <description>A follow-up to the August 2023 latency post. The numbers were fine. The hard part turned out to be everything else: governance, maintenance, invisible labour, and why most Digital Music Labs quietly die after the grant ends.</description>
      <content:encoded><![CDATA[<p><em>This post is based on a manuscript in progress with colleagues from the
RAPP Lab network. It builds directly on the <a href="/posts/nmp-latency-lola-mvtp/">August 2023 latency measurements</a>. That post covered what the
numbers look like. This one covers why getting to those numbers was the
easy part.</em></p>
<hr>
<h2 id="the-setup">The Setup</h2>
<p>After spending two and a half years measuring latency across six European
research-network links, I can tell you that the audio numbers are achievable.
7.5 to 22.5 ms one-way from Prague to Tallinn, LoLa and MVTP both working,
musicians playing together across national borders in real time. Technically,
that story has a satisfying ending.</p>
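<p>As a rough sanity check on why those one-way figures sit where they do (my illustration, not part of the measurement setup): the great-circle distance and the speed of light in fibre give a hard physical floor. The city coordinates below are approximate, and real fibre paths are longer than the straight line.</p>

```python
import math

# Rough physical floor for one-way latency between Prague and Tallinn.
# Coordinates are approximate city centres, not the actual endpoint sites.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in km."""
    R = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

distance_km = haversine_km(50.08, 14.43, 59.44, 24.75)  # roughly 1200 km

# Light in fibre travels at roughly c / 1.47, i.e. about 204,000 km/s.
floor_ms = distance_km / 204_000 * 1000  # straight-line one-way floor
```

<p>On these assumptions the straight-line floor comes out around 6 ms, which is why a measured 7.5 ms over a real (longer) fibre path is close to the best the physics allows.</p>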
<p>What the measurement paper does not capture is everything that had to be true
institutionally before we could run a single test. The firewall negotiations.
The repeated calibration sessions. The network configuration that nobody
outside our small group knew how to reproduce when someone left. The grant that
funded the equipment but not the person who kept it running. The performance
session that nearly collapsed because a campus IT update had silently changed a
routing rule three days prior.</p>
<p>The technical infrastructure worked. The institutional infrastructure around it
was precarious in ways that only became visible when something broke.</p>
<p>This is what the follow-up paper tries to name.</p>
<hr>
<h2 id="what-is-a-digital-music-lab-actually">What Is a Digital Music Lab, Actually?</h2>
<p>The term gets applied to everything from a laptop cart in a classroom to
IRCAM Paris. We use it to mean something specific: a <strong>Digital Music Lab
(DML)</strong> is a hybrid environment where space, equipment, software, personnel
and organisational routines are configured together to support iterative
artistic experimentation, research-led learning and outward-facing engagement.</p>
<p>The key word is <em>configured together</em>. A room full of excellent hardware
is no more a DML than a building full of books is a library. What makes
either work is an invisible layer of social organisation: access policies,
shared norms, maintained documentation, people who know what to do when
something breaks.</p>
<p>We borrow a concept from infrastructure studies to describe this:
<strong>performative infrastructure</strong>. The concept draws on Star and Ruhleder (1996),
and it captures something precise — that infrastructure does not merely
<em>enable</em> activity, it also <em>shapes</em> what kinds of activity are possible in the
first place. The decision to use LoLa rather than Zoom is not just a technical
choice; it is an institutional statement about what kind of musical interaction
this space is designed to support, and about who is expected to use it.</p>
<p>This framing matters because it shifts the design question. You are not asking
&ldquo;what equipment should we buy?&rdquo; You are asking &ldquo;what kind of practice do we
want to make possible, and what organisational conditions make that practice
sustainable?&rdquo;</p>
<hr>
<h2 id="four-things-that-actually-determine-whether-a-dml-survives">Four Things That Actually Determine Whether a DML Survives</h2>
<h3 id="1-flexible-by-design-not-by-accident">1. Flexible by design, not by accident</h3>
<p>Resilient labs resist the temptation to optimise for one use case. The systems
that have lasted — Stanford CCRMA is the obvious reference point, nearly five decades
and counting — tend to separate a stable core (networking, routing,
authentication, documentation) from a more rapidly changing layer of creative
tools and workflows. The core does not change when you switch DAWs or update
your streaming platform. The tools on top of it can.</p>
<p>This sounds obvious. In practice it means being deliberate about which
dependencies you are willing to accept. A lab built on a single vendor
ecosystem can offer tight integration, but it creates a single point of
failure and a maintenance contract you will be negotiating forever. A lab built
on open protocols and well-documented configurations is more work to set up and
less work to sustain.</p>
<p>The other thing flexibility buys is pedagogical range. The same environment
can host an introductory workshop, an advanced performance-research project and
a public-facing concert without requiring incompatible reconfiguration for each.
This is not a luxury. It is what makes a DML worth the overhead compared to
just booking a studio.</p>
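<p>One way to make that separation concrete is to write the stack down as data, with the stable core and the replaceable tools layer kept apart. The sketch below is entirely hypothetical — the slot names and review cycles are illustrative, not a description of any particular lab.</p>

```python
# Hypothetical inventory sketch: the stable core is reviewed on slow cycles
# and survives tool changes; the creative-tools layer is swappable per project.
lab_stack = {
    "core": {  # slow-moving backbone
        "networking":     {"what": "campus uplink, VLANs, QoS rules", "review_years": 5},
        "authentication": {"what": "institutional SSO, access policy", "review_years": 5},
        "documentation":  {"what": "runbooks, calibration notes", "review_years": 1},
    },
    "tools": {  # fast-moving, replaceable
        "streaming": {"what": "LoLa / MVTP", "review_years": 1},
        "daw":       {"what": "whatever the current project needs", "review_years": 1},
    },
}

def swap_tool(stack, slot, replacement):
    """Replacing a tool touches only the tools layer, never the core."""
    stack["tools"][slot]["what"] = replacement
    return stack

swap_tool(lab_stack, "streaming", "some future platform")
```

<p>The point of writing it down this way is the invariant in <code>swap_tool</code>: if changing a creative tool ever requires editing the core, the layering has leaked.</p>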
<h3 id="2-governance-that-survives-personnel-turnover">2. Governance that survives personnel turnover</h3>
<p>The single most dangerous sentence in any DML is: <em>&ldquo;We can ask [person] — they
know how it works.&rdquo;</em></p>
<p>Every lab has that person. The one who configured the routing. The one who
knows which cable does what. The one who has the institutional memory of every
workaround and edge case. When that person moves on, the lab frequently becomes
unreliable within six months and functionally inaccessible within a year — even
if all the equipment is still there. We call these <strong>zombie infrastructures</strong>:
technically present, functionally dead.</p>
<p>The corrective is not to document everything (though that helps). It is to
design governance so that knowledge is distributed by default. Distributed
stewardship roles — student assistants, rotating committees, peer mentors —
mean that multiple people develop operational knowledge as a matter of routine,
not as emergency knowledge transfer when someone announces they are leaving.</p>
<p>Technical staff need to be treated as co-creators in this model, not as
service providers. When networked performance is framed as peripheral
experimentation rather than core infrastructure, maintenance becomes precarious
and invisible. When it is framed as core, collaboration between artistic and
technical roles becomes institutional routine.</p>
<h3 id="3-maintenance-as-a-budget-line-not-an-afterthought">3. Maintenance as a budget line, not an afterthought</h3>
<p>Here is the infrastructure paradox: systems are valued for enabling novelty,
but they require boring, recurring investment to remain usable. Project funding
solves the novelty problem. It almost never solves the maintenance problem.</p>
<p>The costs that make a lab reliable are not one-off:</p>
<ul>
<li>Staff continuity (or explicit knowledge transfer when staff change)</li>
<li>Documentation that is actively maintained, not written once and forgotten</li>
<li>Renewal cycles for hardware and software that actually match the pace of
change in the underlying ecosystem</li>
<li>User support during active sessions, not just during setup</li>
</ul>
<p>At HfMT Köln, the operational work that dominated actual implementation time
was none of the things that appear in grant applications: coordinating network
pathways across campus boundaries, establishing and re-establishing calibration
routines after infrastructure updates, producing documentation legible to
people who were not present at the original setup, providing real-time support
during rehearsals when something behaved unexpectedly.</p>
<p>None of this is glamorous. All of it is what determines whether musicians can
actually use the system on a given Tuesday afternoon.</p>
<h3 id="4-inclusion-that-is-designed-not-assumed">4. Inclusion that is designed, not assumed</h3>
<p>Technology-intensive environments reproduce exclusion reliably unless they are
actively designed not to. The mechanisms are familiar: assumed prior
experience, cultural signals about who belongs, scheduling that conflicts with
caring responsibilities, documentation in a single language, interfaces that
reward a particular kind of technical confidence.</p>
<p>For DMLs specifically, there is an additional layer. Networked music performance
is genuinely different from co-located performance. The latency conditions
require different listening and coordination strategies. For musicians trained
in tight synchronous ensemble playing, the first experience of performing over
a network is often disorienting — latency is not a technical glitch to be fixed
but a compositional condition to be understood and worked with.</p>
<p>Framing this as a deficit is pedagogically counterproductive. Framing it as an
occasion to develop new artistic vocabulary — to think deliberately about what
interaction strategies work at 12 ms versus 22 ms, about how anticipatory
listening changes the character of improvisation — turns an obstacle into
content. Some of the most interesting musical thinking in our sessions came
from participants who were trying to understand why something that was
effortless in a rehearsal room required conscious attention over the network.</p>
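<p>To put numbers on what those conditions mean musically, it helps to express a latency as a fraction of a beat at a given tempo. This is an illustrative calculation only; the point at which coordination strategies have to change varies by repertoire and player.</p>

```python
def latency_as_beat_fraction(latency_ms: float, bpm: float) -> float:
    """Express a latency as a fraction of one beat at a given tempo."""
    beat_ms = 60_000 / bpm  # duration of one beat in milliseconds
    return latency_ms / beat_ms

# At 120 BPM one beat lasts 500 ms:
low  = latency_as_beat_fraction(12.0, 120)   # 0.024 -> about 2% of a beat
high = latency_as_beat_fraction(22.5, 120)   # 0.045 -> about 4.5% of a beat

# What a player hears back from the far end is the round trip,
# i.e. twice the one-way figure.
round_trip_fraction = 2 * high  # 0.09 -> nearly a tenth of a beat
```

<p>Seen this way, the difference between 12 ms and 22 ms is the difference between an offset a player can absorb unconsciously and one that starts to demand the anticipatory listening described above — and the round trip, not the one-way figure, is what shapes the experience.</p>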
<hr>
<h2 id="the-tensions-that-do-not-resolve">The Tensions That Do Not Resolve</h2>
<p>Being honest about what the paper does not solve:</p>
<p><strong>Project funding versus operational costs.</strong> We do not have a structural
solution to the mismatch between how labs are funded (innovation grants with
defined end dates) and how they need to operate (indefinitely, with predictable
maintenance budgets). Collaborative purchasing agreements and shared technical
teams across institutions can distribute the burden, but they introduce
coordination overhead. There is no clean answer here.</p>
<p><strong>Experimentation versus accountability metrics.</strong> Universities and funders
want quantifiable outputs. Artistic experimentation often produces its most
valuable results as changed practices and new aesthetic understanding — things
that do not appear in publication counts or utilisation statistics. The best
available response is to be explicit about this mismatch when negotiating
evaluation criteria, and to establish review processes that include artistic
peers and community partners rather than only administrators. This is possible
more often than people think, but it requires someone to argue for it
proactively.</p>
<p><strong>Openness versus depth.</strong> A lab built for maximum accessibility is not the
same as a lab optimised for a specific research agenda, and trying to be both
usually means doing neither well. The design question is not which is better
but where the tradeoff lies for a particular institution&rsquo;s mission. CCRMA and
IRCAM have made different bets on this axis over decades and both have produced
important work. The mistake is having no opinion about where you sit on the
spectrum.</p>
<hr>
<h2 id="recommendations">Recommendations</h2>
<p>These are for institutions and funders, assembled from what the paper
describes as working across multiple DML contexts:</p>
<ul>
<li><strong>Treat DMLs as long-term cultural infrastructure.</strong> Recurring budget lines
for renewal, documentation and support — not just start-up funding.</li>
<li><strong>Separate your stable backbone from your creative tools.</strong> Networking,
routing, authentication and documentation should not be rebuilt every time
you change your video platform.</li>
<li><strong>Design governance that does not rely on one person.</strong> Distributed
stewardship roles, clear succession documentation, operational knowledge
treated as shared rather than individual.</li>
<li><strong>Make invisible labour visible.</strong> Technical stewardship, facilitation and
community liaison need to appear in hiring, workload models and evaluation
— not just in informal practice.</li>
<li><strong>Lower the floor for participation.</strong> Scaffolded onboarding, peer mentoring,
programming that supports diverse musical practices and levels of technical
experience.</li>
<li><strong>Sort out data governance before you start recording.</strong> Consent, archiving
and reuse policies for audio/video, especially when community partners or
students are involved.</li>
<li><strong>Plan for the lab&rsquo;s eventual obsolescence.</strong> Versioning policies, migration
plans, criteria for retiring tools. Zombie infrastructures are a governance
failure, not a technical one.</li>
<li><strong>Evaluate on multiple axes.</strong> Technical reliability is one. Learning
trajectories, student agency, community partnership durability and artistic
outcomes are others. Reporting only the first one creates a misleading
picture of whether the lab is actually working.</li>
</ul>
<hr>
<h2 id="what-this-does-and-does-not-claim">What This Does and Does Not Claim</h2>
<p>The argument in the paper is conceptual and practice-informed rather than
empirical in the standard sense. We synthesise literature and draw on the
HfMT Köln implementation as a vignette — it is an illustration, not a
representative sample. The framework we propose (four design principles, the
performative infrastructure framing) is offered as an analytical vocabulary
for planning and evaluation, not as a validated theory.</p>
<p>What it is useful for: making implicit infrastructure choices explicit, naming
tensions before they become crises, and supporting more realistic conversations
between artistic users, technical staff and institutional leadership about what
it actually takes to make this work.</p>
<hr>
<h2 id="references">References</h2>
<p>Borgdorff, H. (2012). <em>The Conflict of the Faculties: Perspectives on
Artistic Research and Academia.</em> Leiden University Press.</p>
<p>Labbé, D., Zuberec, C., &amp; Turner, S. (2022). Creative hubs in Hanoi,
Vietnam: Transgressive spaces in a socialist state? <em>Urban Studies</em>.
<a href="https://doi.org/10.1177/00420980221086371">https://doi.org/10.1177/00420980221086371</a></p>
<p>McKay, G. (2017). Community music: History and current practice.
<em>International Journal of Community Music</em>, 10(2), 129–137.
<a href="https://doi.org/10.1386/ijcm.10.2.129_1">https://doi.org/10.1386/ijcm.10.2.129_1</a></p>
<p>Morreale, F., Bowers, J., &amp; McPherson, A. (2021). Collaborating in
distributed musical partnerships. <em>Computers in Human Behavior</em>, 120,
106757. <a href="https://doi.org/10.1016/j.chb.2021.106757">https://doi.org/10.1016/j.chb.2021.106757</a></p>
<p>Selwyn, N. (2021). <em>Education and Technology: Key Issues and Debates</em>
(3rd ed.). Bloomsbury Academic.</p>
<p>Star, S. L., &amp; Ruhleder, K. (1996). Steps toward an ecology of
infrastructure. <em>Information Systems Research</em>, 7(1), 111–134.
<a href="https://doi.org/10.1287/isre.7.1.111">https://doi.org/10.1287/isre.7.1.111</a></p>
<p>Wenger, E. (1998). <em>Communities of Practice: Learning, Meaning, and
Identity.</em> Cambridge University Press.
<a href="https://doi.org/10.1017/CBO9780511803932">https://doi.org/10.1017/CBO9780511803932</a></p>
<hr>
<h2 id="changelog">Changelog</h2>
<ul>
<li><strong>2026-01-20</strong>: Removed the Chafe (2018) &ldquo;Stanford CCRMA: A 40-year retrospective&rdquo; reference, which could not be confirmed in available databases (DOI does not resolve, not listed in <em>Computer Music Journal</em> 42(3)). The body text reference to CCRMA as an institutional example is retained; it does not depend on this citation.</li>
<li><strong>2026-01-20</strong>: Changed &ldquo;The term comes from Star and Ruhleder (1996)&rdquo; to &ldquo;The concept draws on Star and Ruhleder (1996).&rdquo; Star and Ruhleder&rsquo;s paper is the foundational text on relational infrastructure, but they did not coin the specific compound term &ldquo;performative infrastructure.&rdquo;</li>
</ul>
]]></content:encoded>
    </item>
  </channel>
</rss>
