<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Networked-Music-Performance on Sebastian Spicker</title>
    <link>https://sebastianspicker.github.io/tags/networked-music-performance/</link>
    <description>Recent content in Networked-Music-Performance on Sebastian Spicker</description>
    <image>
      <title>Sebastian Spicker</title>
      <url>https://sebastianspicker.github.io/og-image.png</url>
      <link>https://sebastianspicker.github.io/og-image.png</link>
    </image>
    <generator>Hugo -- 0.160.0</generator>
    <language>en</language>
    <lastBuildDate>Fri, 22 Nov 2024 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://sebastianspicker.github.io/tags/networked-music-performance/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>After the Connection Is Stable, the Hard Part Begins</title>
      <link>https://sebastianspicker.github.io/posts/nmp-curriculum-reflective-practice/</link>
      <pubDate>Fri, 22 Nov 2024 00:00:00 +0000</pubDate>
      <guid>https://sebastianspicker.github.io/posts/nmp-curriculum-reflective-practice/</guid>
      <description>A third post in the networked music performance series. Technical latency is solved. Institutional infrastructure has a name. What students actually learn — and what conservatoire curricula consistently get wrong about teaching it — turns out to be a different problem entirely.</description>
      <content:encoded><![CDATA[<p><em>Third post in a series. The <a href="/posts/nmp-latency-lola-mvtp/">August 2023 post</a>
covered latency measurements across six European research-network links.
The <a href="/posts/digital-music-labs-infrastructure/">June 2024 post</a> covered
what institutional infrastructure needs to look like for any of that to
be sustainably usable. This one covers what happens after both of those
problems are solved — which is when the genuinely interesting educational
challenges start.</em></p>
<p><em>Based on a manuscript with colleagues from the RAPP Lab. Not yet peer-reviewed.</em></p>
<hr>
<h2 id="the-gap-nobody-talks-about">The Gap Nobody Talks About</h2>
<p>There is a version of the NMP success story that stops too early. It goes: we
installed LoLa, measured the latency, it came in at 9.5 ms to Vienna, the
musicians played together across 745 km, it worked. Success.</p>
<p>What this story skips is the classroom after the demo. The student who can
follow a setup checklist perfectly and still has no idea what to do musically
when the connection is stable. The ensemble that gets a clean signal running
and then plays exactly the same repertoire in exactly the same way they would
in a co-present rehearsal, fighting the latency instead of working with it,
frustrated when it does not feel right. The assessment rubric that checks off
&ldquo;maintained stable connection&rdquo; and &ldquo;completed the performance&rdquo; and has nothing
to say about everything that actually constitutes musical learning in a
networked context.</p>
<p>The gap between <em>technical feasibility</em> and <em>educational transformation</em> is
the subject of this post. Closing it turns out to require a different kind of
curriculum design than most conservatoires have tried.</p>
<hr>
<h2 id="what-gets-taught-versus-what-needs-to-be-learned">What Gets Taught Versus What Needs to Be Learned</h2>
<p>The default curricular response to NMP has been to treat it as a technical
skill with an artistic application. Students learn to configure an audio
interface, manage routing, establish a LoLa connection, and then — implicitly
— go do music. The technical content gets staged as a prerequisite to the
&ldquo;real&rdquo; work.</p>
<p>This ordering is wrong in a specific way. Technical setup work is genuinely
necessary, but making it a prerequisite treats the relationship between
technology and musical practice as sequential rather than recursive. In
practice, the interesting musical problems only become visible <em>through</em> the
technical ones. A student does not understand why buffer size matters until
they have felt the difference between a 5 ms and a 40 ms offset in a
coordination-intensive passage. A student does not develop an opinion about
audio routing configurations until they have experienced a rehearsal collapse
caused by a routing error they could have prevented.</p>
<p>The RAPP Lab&rsquo;s recurring insight across several years of module iterations
at HfMT Köln was more direct: once learners can establish a stable connection,
the harder challenge is developing artistic, collaborative and reflective
strategies for making music <em>together apart</em>. Technical fluency is a
foundation, not a destination.</p>
<hr>
<h2 id="the-curriculum-we-ended-up-with">The Curriculum We Ended Up With</h2>
<p>It took several cycles to get there. The early format was weekend workshops —
open, exploratory, no formal assessment, primarily for advanced students who
self-selected in. These were useful precisely because they were informal: they
revealed quickly how technical and musical questions become inextricable once
you are actually playing, and they gave us evidence about where students got
stuck that we would not have found from a needs analysis.</p>
<p>Over time, elements of those workshops were developed into recurring
curriculum-embedded modules, which then fed into independent study projects
and eventually into external collaborations and performances. The trajectory
mattered: moving from a one-off event to something longitudinal meant that
knowledge built across cohorts rather than resetting every time.</p>
<p>The module structure that emerged has three interlocking elements:</p>
<p><strong>Progressive task design.</strong> Early sessions are tightly scoped:
specific technical-musical exercises, limited repertoire, well-defined
success criteria. Later sessions move toward open-ended projects, student-led
rehearsal planning, and eventually cross-institutional partnerships where
variables are genuinely outside anyone&rsquo;s control. The point of the early
constraints is not to make things easier — it is to create conditions where
students can notice what they are doing rather than just surviving.</p>
<p><strong>Journals and debriefs.</strong> Students kept individual reflective journals
throughout modules, documenting not just what happened but how they responded
to it — technical problems, musical decisions, moments of coordination failure
and recovery, questions they could not answer at the time. Group debriefs
after each rehearsal then turned those individual threads into collective
knowledge: comparing strategies, naming the problems that came up repeatedly,
developing shared language for rehearsal coordination.</p>
<p>The debrief is the part of this model that I think gets undervalued. It is
not just reflection — it is <em>curriculum production</em>. Strategies that emerged
from one cohort&rsquo;s debriefs became documented starting points for subsequent
cohorts. Knowledge accumulated rather than evaporating when the semester ended.</p>
<p><strong>Portfolio assessment.</strong> Rather than assessing primarily on a final
performance, students assembled portfolios that could include curated journal
excerpts, rehearsal documentation, reflective syntheses, and accounts of
how their thinking changed. The question being assessed was not &ldquo;did you play
the concert&rdquo; but &ldquo;can you articulate why you made the decisions you made, and
what you would do differently.&rdquo;</p>
<hr>
<h2 id="what-students-actually-learn-when-the-curriculum-works">What Students Actually Learn (When the Curriculum Works)</h2>
<p>Four outcomes recurred across the RAPP Lab iterations, consistently enough
to be worth naming:</p>
<h3 id="1-technical-agency">1. Technical agency</h3>
<p>This is different from technical competence. Competence means you can follow
a procedure. Agency means you understand the procedure well enough to deviate
from it intelligently when something goes wrong — to diagnose what failed,
generate a hypothesis about why, and try something different.</p>
<p>The shift happened when students stopped treating technical problems as
interruptions to the music and started treating them as information about
the system they were working inside. A dropout is not just an annoyance; it
is evidence about where the failure occurred. Getting to that reframe took,
on average, several weeks of structured reflection. It did not happen from
reading documentation.</p>
<h3 id="2-adaptive-improvisation">2. Adaptive improvisation</h3>
<p>Latency changes what real-time musical coordination can mean. You cannot rely
on the same multimodal cues — breath, gesture, shared acoustics — that make
co-present ensemble playing feel intuitive. You have to develop explicit
cueing systems, turn-taking conventions, contingency plans for when the
connection degrades mid-performance.</p>
<p>What we observed was that this constraint generated a specific kind of
musical creativity. Students improvised not just with musical material but
with rehearsal organisation itself — inventing systems, testing them,
discarding the ones that did not work, documenting the ones that did. Some of
the most musically interesting moments in the modules came from sessions where
the technology was behaving badly and students had to make it work anyway.</p>
<p>There is research on &ldquo;productive failure&rdquo; — deliberately designing tasks that
exceed students&rsquo; current control, because the struggle and recovery produce
deeper learning than smooth execution (Kapur 2016). NMP turns out to be a
natural context for this, not by design but because the network does not
cooperate on schedule.</p>
<h3 id="3-collaborative-communication">3. Collaborative communication</h3>
<p>Co-present rehearsal relies heavily on implicit communication: the
physical space makes many things legible without anyone having to say them.
In a networked rehearsal, the spatial and gestural channel is degraded or
absent. Students had to make explicit what would normally be implicit —
articulating coordination strategies, naming the problems they were
experiencing rather than hoping the ensemble would notice, developing a
vocabulary for talking about timing and latency as musical parameters.</p>
<p>This turned out to generalise. Students who had worked through several
networked rehearsal cycles were noticeably better at explicit musical
communication in co-present settings too, because they had been forced to
develop the vocabulary in a context where it was necessary.</p>
<h3 id="4-reflective-identity">4. Reflective identity</h3>
<p>The students who got the most out of the modules were the ones who stopped
waiting for the conditions to improve and started working with the conditions
as they were. Latency as a compositional constraint rather than a defect to
be routed around. Uncertainty as an artistic condition rather than a
technical failure.</p>
<p>The journal entries where this shift is most visible are not the ones that
describe what the student did. They are the ones that describe a change in
how the student understands their own practice — who they are as a musician
in relation to an environment they cannot fully control. That is a different
kind of outcome than anything a timing metric captures.</p>
<hr>
<h2 id="the-assessment-problem">The Assessment Problem</h2>
<p>The hardest part of all of this to translate into institutional language is
assessment. The conservatoire has well-developed frameworks for evaluating
performances. It has much weaker frameworks for evaluating the learning that
happens before and between and underneath performances.</p>
<p>Checklist rubrics — was the connection stable, was the latency within
acceptable range, did the performance complete — are useful for safety and
reliability. They are poor evidence for whether a student has developed the
capacity to work reflectively and artistically in a mediated ensemble
environment. A student who achieved a stable connection by following
instructions exactly and a student who achieved it by diagnosing a routing
error mid-session look identical on a checklist. They have had very different
learning experiences.</p>
<p>Portfolio assessment addresses this by making the reasoning visible. When a
student can explain why they chose a particular buffer configuration given
the specific network characteristics of that session, how that choice affected
the musical phrasing in the piece they were rehearsing, and what they would
change next time — that is evidence of something real. It is also harder to
assess than a timing log, which is probably why most programmes avoid it.</p>
<p>The argument is not that quantitative indicators are useless. It is that
they function better as scaffolding for reflective judgement than as the
primary evidence of learning. Mixed assessment ecologies — technical logs
plus journals plus portfolio syntheses — are more honest about what is
actually happening educationally.</p>
<hr>
<h2 id="what-this-does-not-solve">What This Does Not Solve</h2>
<p>The model described here depends on teaching staff who can facilitate
reflective dialogue, curate knowledge across cohorts, and participate in
iterative curriculum redesign. That is a specific professional competence
that is not automatically present in a conservatoire staffed primarily by
performing musicians. The training and support structures needed to develop
it are an open question this paper does not fully answer.</p>
<p>The curriculum is also not portable as-is. The RAPP Lab model emerged in a
specific institutional context — HfMT Köln, specific partner network,
specific funding structure, specific cohort of students. The four outcomes
and the general pedagogical logic may transfer; the specific formats will
need adaptation. Any institution that tries to implement this without going
through at least one cycle of their own iterative development is likely to
end up with a checklist version of something that works only when it is a
living process.</p>
<p>And the technology keeps moving. LoLa is a mature platform but the
ecosystem around it — network configurations, operating system support,
hardware lifecycles — changes faster than curriculum documentation. Building
responsiveness into the curriculum itself, rather than treating it as a fixed
syllabus, is the structural answer. Easier to recommend than to institutionalise.</p>
<hr>
<h2 id="references">References</h2>
<p>Barrett, H. C. (2007). Researching electronic portfolios and learner
engagement. <em>Journal of Adolescent &amp; Adult Literacy</em>, 50(6), 436–449.</p>
<p>Borgdorff, H. (2012). <em>The Conflict of the Faculties.</em> Leiden University Press.</p>
<p>The Design-Based Research Collective (2003). Design-based research: An
emerging paradigm for educational inquiry. <em>Educational Researcher</em>, 32(1),
5–8.</p>
<p>Kapur, M. (2016). Examining productive failure, productive success,
unproductive failure, and unproductive success in learning. <em>Educational
Psychologist</em>, 51(2), 289–299. <a href="https://doi.org/10.1080/00461520.2016.1155457">https://doi.org/10.1080/00461520.2016.1155457</a></p>
<p>Lave, J. &amp; Wenger, E. (1991). <em>Situated Learning.</em> Cambridge University Press.</p>
<p>Sadler, D. R. (2009). Indeterminacy in the use of preset criteria for
assessment and grading. <em>Assessment &amp; Evaluation in Higher Education</em>,
34(2), 159–179. <a href="https://doi.org/10.1080/02602930801956059">https://doi.org/10.1080/02602930801956059</a></p>
<p>Schön, D. A. (1983). <em>The Reflective Practitioner.</em> Basic Books.</p>
<p>Wenger, E. (1998). <em>Communities of Practice.</em> Cambridge University Press.
<a href="https://doi.org/10.1017/CBO9780511803932">https://doi.org/10.1017/CBO9780511803932</a></p>
<hr>
<h2 id="changelog">Changelog</h2>
<ul>
<li><strong>2026-01-20</strong>: Updated the Sadler (2009) reference title to &ldquo;Indeterminacy in the use of preset criteria for assessment and grading,&rdquo; matching the journal article at this DOI. Updated the Kapur (2016) reference to the full published title: &ldquo;Examining productive failure, productive success, unproductive failure, and unproductive success in learning.&rdquo;</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>The Boring Parts of Networked Music Performance</title>
      <link>https://sebastianspicker.github.io/posts/digital-music-labs-infrastructure/</link>
      <pubDate>Fri, 14 Jun 2024 00:00:00 +0000</pubDate>
      <guid>https://sebastianspicker.github.io/posts/digital-music-labs-infrastructure/</guid>
      <description>A follow-up to the August 2023 latency post. The numbers were fine. The hard part turned out to be everything else: governance, maintenance, invisible labour, and why most Digital Music Labs quietly die after the grant ends.</description>
      <content:encoded><![CDATA[<p><em>This post is based on a manuscript in progress with colleagues from the
RAPP Lab network. It builds directly on the <a href="/posts/nmp-latency-lola-mvtp/">August 2023 latency measurements</a>. That post covered what the
numbers look like. This one covers why getting to those numbers was the
easy part.</em></p>
<hr>
<h2 id="the-setup">The Setup</h2>
<p>After spending two and a half years measuring latency across six European
research-network links, I can tell you that the audio numbers are achievable.
One-way latencies from 7.5 ms (Prague) to 22.5 ms (Tallinn), LoLa and MVTP both working,
musicians playing together across national borders in real time. Technically,
that story has a satisfying ending.</p>
<p>What the measurement paper does not capture is everything that had to be true
institutionally before we could run a single test. The firewall negotiations.
The repeated calibration sessions. The network configuration that nobody
outside our small group knew how to reproduce when someone left. The grant that
funded the equipment but not the person who kept it running. The performance
session that nearly collapsed because a campus IT update had silently changed a
routing rule three days prior.</p>
<p>The technical infrastructure worked. The institutional infrastructure around it
was precarious in ways that only became visible when something broke.</p>
<p>This is what the follow-up paper tries to name.</p>
<hr>
<h2 id="what-is-a-digital-music-lab-actually">What Is a Digital Music Lab, Actually?</h2>
<p>The term gets applied to everything from a laptop cart in a classroom to
IRCAM Paris. We use it to mean something specific: a <strong>Digital Music Lab
(DML)</strong> is a hybrid environment where space, equipment, software, personnel
and organisational routines are configured together to support iterative
artistic experimentation, research-led learning and outward-facing engagement.</p>
<p>The key phrase is <em>configured together</em>. A room full of excellent hardware is no more
a DML than a building full of books is a library. What makes
either work is an invisible layer of social organisation: access policies,
shared norms, maintained documentation, people who know what to do when
something breaks.</p>
<p>We borrow a concept from infrastructure studies to describe this:
<strong>performative infrastructure</strong>. The concept draws on Star and Ruhleder (1996),
and it captures something precise — that infrastructure does not merely
<em>enable</em> activity, it also <em>shapes</em> what kinds of activity are possible in the
first place. The decision to use LoLa rather than Zoom is not just a technical
choice; it is an institutional statement about what kind of musical interaction
this space is designed to support, and about who is expected to use it.</p>
<p>This framing matters because it shifts the design question. You are not asking
&ldquo;what equipment should we buy?&rdquo; You are asking &ldquo;what kind of practice do we
want to make possible, and what organisational conditions make that practice
sustainable?&rdquo;</p>
<hr>
<h2 id="four-things-that-actually-determine-whether-a-dml-survives">Four Things That Actually Determine Whether a DML Survives</h2>
<h3 id="1-flexible-by-design-not-by-accident">1. Flexible by design, not by accident</h3>
<p>Resilient labs resist the temptation to optimise for one use case. The systems
that have lasted — Stanford CCRMA is the obvious reference point, nearly five decades
and counting — tend to separate a stable core (networking, routing,
authentication, documentation) from a more rapidly changing layer of creative
tools and workflows. The core does not change when you switch DAWs or update
your streaming platform. The tools on top of it can.</p>
<p>This sounds obvious. In practice it means being deliberate about which
dependencies you are willing to accept. A lab built on a single vendor
ecosystem can offer tight integration, but it creates a single point of
failure and a maintenance contract you will be negotiating forever. A lab built
on open protocols and well-documented configurations is more work to set up and
less work to sustain.</p>
<p>The other thing flexibility buys is pedagogical range. The same environment
can host an introductory workshop, an advanced performance-research project and
a public-facing concert without requiring incompatible reconfiguration for each.
This is not a luxury. It is what makes a DML worth the overhead compared to
just booking a studio.</p>
<h3 id="2-governance-that-survives-personnel-turnover">2. Governance that survives personnel turnover</h3>
<p>The single most dangerous sentence in any DML is: <em>&ldquo;We can ask [person] — they
know how it works.&rdquo;</em></p>
<p>Every lab has that person. The one who configured the routing. The one who
knows which cable does what. The one who has the institutional memory of every
workaround and edge case. When that person moves on, the lab frequently becomes
unreliable within six months and functionally inaccessible within a year — even
if all the equipment is still there. We call these <strong>zombie infrastructures</strong>:
technically present, functionally dead.</p>
<p>The corrective is not to document everything (though that helps). It is to
design governance so that knowledge is distributed by default. Distributed
stewardship roles — student assistants, rotating committees, peer mentors —
mean that multiple people develop operational knowledge as a matter of routine,
not as emergency knowledge transfer when someone announces they are leaving.</p>
<p>Technical staff need to be treated as co-creators in this model, not as
service providers. When networked performance is framed as peripheral
experimentation rather than core infrastructure, maintenance becomes precarious
and invisible. When it is framed as core, collaboration between artistic and
technical roles becomes institutional routine.</p>
<h3 id="3-maintenance-as-a-budget-line-not-an-afterthought">3. Maintenance as a budget line, not an afterthought</h3>
<p>Here is the infrastructure paradox: systems are valued for enabling novelty,
but they require boring, recurring investment to remain usable. Project funding
solves the novelty problem. It almost never solves the maintenance problem.</p>
<p>The costs that make a lab reliable are not one-off:</p>
<ul>
<li>Staff continuity (or explicit knowledge transfer when staff change)</li>
<li>Documentation that is actively maintained, not written once and forgotten</li>
<li>Renewal cycles for hardware and software that actually match the pace of
change in the underlying ecosystem</li>
<li>User support during active sessions, not just during setup</li>
</ul>
<p>At HfMT Köln, the operational work that dominated actual implementation time
was none of the things that appear in grant applications: coordinating network
pathways across campus boundaries, establishing and re-establishing calibration
routines after infrastructure updates, producing documentation legible to
people who were not present at the original setup, providing real-time support
during rehearsals when something behaved unexpectedly.</p>
<p>None of this is glamorous. All of it is what determines whether musicians can
actually use the system on a given Tuesday afternoon.</p>
<h3 id="4-inclusion-that-is-designed-not-assumed">4. Inclusion that is designed, not assumed</h3>
<p>Technology-intensive environments reproduce exclusion reliably unless they are
actively designed not to. The mechanisms are familiar: assumed prior
experience, cultural signals about who belongs, scheduling that conflicts with
caring responsibilities, documentation in a single language, interfaces that
reward a particular kind of technical confidence.</p>
<p>For DMLs specifically, there is an additional layer. Networked music performance
is genuinely different from co-located performance. The latency conditions
require different listening and coordination strategies. For musicians trained
in tight synchronous ensemble playing, the first experience of performing over
a network is often disorienting — latency is not a technical glitch to be fixed
but a compositional condition to be understood and worked with.</p>
<p>Framing this as a deficit is pedagogically counterproductive. Framing it as an
occasion to develop new artistic vocabulary — to think deliberately about what
interaction strategies work at 12 ms versus 22 ms, about how anticipatory
listening changes the character of improvisation — turns an obstacle into
content. Some of the most interesting musical thinking in our sessions came
from participants who were trying to understand why something that was
effortless in a rehearsal room required conscious attention over the network.</p>
<hr>
<h2 id="the-tensions-that-do-not-resolve">The Tensions That Do Not Resolve</h2>
<p>Being honest about what the paper does not solve:</p>
<p><strong>Project funding versus operational costs.</strong> We do not have a structural
solution to the mismatch between how labs are funded (innovation grants with
defined end dates) and how they need to operate (indefinitely, with predictable
maintenance budgets). Collaborative purchasing agreements and shared technical
teams across institutions can distribute the burden, but they introduce
coordination overhead. There is no clean answer here.</p>
<p><strong>Experimentation versus accountability metrics.</strong> Universities and funders
want quantifiable outputs. Artistic experimentation often produces its most
valuable results as changed practices and new aesthetic understanding — things
that do not appear in publication counts or utilisation statistics. The best
available response is to be explicit about this mismatch when negotiating
evaluation criteria, and to establish review processes that include artistic
peers and community partners rather than only administrators. This is possible
more often than people think, but it requires someone to argue for it
proactively.</p>
<p><strong>Openness versus depth.</strong> A lab built for maximum accessibility is not the
same as a lab optimised for a specific research agenda, and trying to be both
usually means doing neither well. The design question is not which is better
but where the tradeoff lies for a particular institution&rsquo;s mission. CCRMA and
IRCAM have made different bets on this axis over decades and both have produced
important work. The mistake is not having an opinion about where you sit on
the spectrum.</p>
<hr>
<h2 id="recommendations">Recommendations</h2>
<p>These are for institutions and funders, assembled from what the paper
describes as working across multiple DML contexts:</p>
<ul>
<li><strong>Treat DMLs as long-term cultural infrastructure.</strong> Recurring budget lines
for renewal, documentation and support — not just start-up funding.</li>
<li><strong>Separate your stable backbone from your creative tools.</strong> Networking,
routing, authentication and documentation should not be rebuilt every time
you change your video platform.</li>
<li><strong>Design governance that does not rely on one person.</strong> Distributed
stewardship roles, clear succession documentation, operational knowledge
treated as shared rather than individual.</li>
<li><strong>Make invisible labour visible.</strong> Technical stewardship, facilitation and
community liaison need to appear in hiring, workload models and evaluation
— not just in informal practice.</li>
<li><strong>Lower the floor for participation.</strong> Scaffolded onboarding, peer mentoring,
programming that supports diverse musical practices and levels of technical
experience.</li>
<li><strong>Sort out data governance before you start recording.</strong> Consent, archiving
and reuse policies for audio/video, especially when community partners or
students are involved.</li>
<li><strong>Plan for the lab&rsquo;s eventual obsolescence.</strong> Versioning policies, migration
plans, criteria for retiring tools. Zombie infrastructures are a governance
failure, not a technical one.</li>
<li><strong>Evaluate on multiple axes.</strong> Technical reliability is one. Learning
trajectories, student agency, community partnership durability and artistic
outcomes are others. Reporting only the first one creates a misleading
picture of whether the lab is actually working.</li>
</ul>
<hr>
<h2 id="what-this-does-and-does-not-claim">What This Does and Does Not Claim</h2>
<p>The argument in the paper is conceptual and practice-informed rather than
empirical in the standard sense. We synthesise literature and draw on the
HfMT Köln implementation as a vignette — it is an illustration, not a
representative sample. The framework we propose (four design principles, the
performative infrastructure framing) is offered as an analytical vocabulary
for planning and evaluation, not as a validated theory.</p>
<p>What it is useful for: making implicit infrastructure choices explicit, naming
tensions before they become crises, and supporting more realistic conversations
between artistic users, technical staff and institutional leadership about what
it actually takes to make this work.</p>
<hr>
<h2 id="references">References</h2>
<p>Borgdorff, H. (2012). <em>The Conflict of the Faculties: Perspectives on
Artistic Research and Academia.</em> Leiden University Press.</p>
<p>Labbé, D., Zuberec, C., &amp; Turner, S. (2022). Creative hubs in Hanoi,
Vietnam: Transgressive spaces in a socialist state? <em>Urban Studies</em>.
<a href="https://doi.org/10.1177/00420980221086371">https://doi.org/10.1177/00420980221086371</a></p>
<p>McKay, G. (2017). Community music: History and current practice.
<em>International Journal of Community Music</em>, 10(2), 129–137.
<a href="https://doi.org/10.1386/ijcm.10.2.129_1">https://doi.org/10.1386/ijcm.10.2.129_1</a></p>
<p>Morreale, F., Bowers, J., &amp; McPherson, A. (2021). Collaborating in
distributed musical partnerships. <em>Computers in Human Behavior</em>, 120,
106757. <a href="https://doi.org/10.1016/j.chb.2021.106757">https://doi.org/10.1016/j.chb.2021.106757</a></p>
<p>Selwyn, N. (2021). <em>Education and Technology: Key Issues and Debates</em>
(3rd ed.). Bloomsbury Academic.</p>
<p>Star, S. L., &amp; Ruhleder, K. (1996). Steps toward an ecology of
infrastructure. <em>Information Systems Research</em>, 7(1), 111–134.
<a href="https://doi.org/10.1287/isre.7.1.111">https://doi.org/10.1287/isre.7.1.111</a></p>
<p>Wenger, E. (1998). <em>Communities of Practice: Learning, Meaning, and
Identity.</em> Cambridge University Press.
<a href="https://doi.org/10.1017/CBO9780511803932">https://doi.org/10.1017/CBO9780511803932</a></p>
<hr>
<h2 id="changelog">Changelog</h2>
<ul>
<li><strong>2026-01-20</strong>: Removed the Chafe (2018) &ldquo;Stanford CCRMA: A 40-year retrospective&rdquo; reference, which could not be confirmed in available databases (DOI does not resolve, not listed in <em>Computer Music Journal</em> 42(3)). The body text reference to CCRMA as an institutional example is retained; it does not depend on this citation.</li>
<li><strong>2026-01-20</strong>: Changed &ldquo;The term comes from Star and Ruhleder (1996)&rdquo; to &ldquo;The concept draws on Star and Ruhleder (1996).&rdquo; Star and Ruhleder&rsquo;s paper is the foundational text on relational infrastructure, but they did not coin the specific compound term &ldquo;performative infrastructure.&rdquo;</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>How Low Can You Go? Measuring Latency for Networked Music Performance Across Europe</title>
      <link>https://sebastianspicker.github.io/posts/nmp-latency-lola-mvtp/</link>
      <pubDate>Sat, 26 Aug 2023 00:00:00 +0000</pubDate>
      <guid>https://sebastianspicker.github.io/posts/nmp-latency-lola-mvtp/</guid>
      <description>We measured end-to-end audio and video latency for LoLa and MVTP across six European research-network links. One-way audio latency ranged from 7.5 to 22.5 ms. Routing topology mattered more than geographic distance. Enterprise firewalls were a disaster. Here is what we found.</description>
      <content:encoded><![CDATA[<p><em>This post summarises a manuscript submitted with Benjamin Bentz and colleagues
from the RAPP Lab network. The paper is not yet peer-reviewed; numbers and
conclusions are based on operational measurements collected 2020–2023.
Feedback welcome — particularly from anyone who has run similar measurements
on non-European or wireless-last-mile links.</em></p>
<hr>
<h2 id="the-problem">The Problem</h2>
<p>Musicians playing together in the same room experience acoustic propagation
delay of roughly 3 ms per metre of separation — essentially free latency that
most ensembles never consciously register. When you distribute musicians across
a network, you inherit that propagation cost plus everything the signal chain
adds on top: buffers, codec processing, routing hops, switching overhead.</p>
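<p>As a rough worked example (the six-metre spacing is an illustration, not a
figure from the measurements): two musicians six metres apart on stage already
play across</p>
\[
  6\,\text{m} \times \frac{1\,\text{s}}{343\,\text{m/s}} \approx 17.5\,\text{ms}
\]
<p>of acoustic delay, which is on the order of the better network links
measured below.</p>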
<p>Conventional video-conferencing (Zoom, Teams, etc.) operates at end-to-end
delays of roughly 100–300 ms. That is comfortable for speech — human
conversation tolerates round-trip delays up to about 250 ms before it starts
to feel wrong — but it is well above the threshold at which ensemble timing
breaks down. The NMP literature generally puts the upper bound for
synchronous rhythmic playing somewhere between 20 and 30 ms one-way, with
considerable variation by tempo, instrument, and whether the performers can
see each other [Carôt 2011; Tsioutas &amp; Xylomenos 2021; Medina Victoria 2019].</p>
<p>Specialised low-latency systems cut the processing overhead by avoiding
compression, using hardware-accelerated video pipelines, and riding
research-and-education networks that offer better jitter characteristics than
commodity internet. Two of the better-known ones are <strong>LoLa</strong> (Low Latency
Audio Visual Streaming System, developed at Conservatorio G. Tartini Trieste)
and <strong>MVTP</strong> (Modular Video Transmission Platform, developed at CESNET in
Prague). We deployed both at Hochschule für Musik und Tanz Köln as part of
the RAPP Lab collaboration and spent about two and a half years measuring them.</p>
<hr>
<h2 id="the-latency-budget">The Latency Budget</h2>
<p>End-to-end latency in NMP is cumulative and non-recoverable. Once delay enters
the chain, nothing downstream can subtract it. The budget looks like:</p>
\[
  L_\text{total} = L_\text{capture} + L_\text{buffer} + L_\text{network} + L_\text{playback}
\]<p>Network latency \( L_\text{network} \) includes propagation (roughly
\( d / (2 \times 10^8) \) seconds for a fibre link of distance \( d \) metres,
accounting for the refractive index of glass) plus per-hop processing.
Everything else is system-dependent.</p>
<p>The key insight is that \( L_\text{buffer} \) is not fixed — it is a
consequence of jitter. A jittery link forces larger buffers to avoid
underruns, which directly adds to perceived latency. This is why raw bandwidth
is almost irrelevant for NMP: a 1 Gbps link with erratic jitter will perform
worse than a 100 Mbps link with deterministic behaviour.</p>
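<p>To make the budget concrete, here is a toy calculator. Every default below
(capture and playback figures, hop count, per-hop cost, buffer size) is an
illustrative assumption of mine, not a measured value from the paper; only the
propagation constant comes from the formula above.</p>
<pre><code class="language-python">def latency_budget_ms(distance_km, buffer_samples=64, sample_rate=48000,
                      hops=8, per_hop_ms=0.1, capture_ms=1.0, playback_ms=1.0):
    """Toy one-way latency budget:
    L_total = L_capture + L_buffer + L_network + L_playback.
    Fibre propagation at roughly 2e8 m/s, i.e. about 5 microseconds per km.
    All defaults are illustrative assumptions, not measured values.
    """
    propagation_ms = distance_km * 1000.0 / 2e8 * 1000.0  # d / (2 * 10^8 m/s)
    buffer_ms = 1000.0 * buffer_samples / sample_rate
    network_ms = propagation_ms + hops * per_hop_ms
    return capture_ms + buffer_ms + network_ms + playback_ms

# Cologne-Vienna air distance from the results table below (745 km):
# propagation alone contributes about 3.7 ms of the measured 9.5 ms.
print(round(latency_budget_ms(745), 1))
</code></pre>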
<hr>
<h2 id="what-we-measured-and-how">What We Measured and How</h2>
<p><strong>Network RTT.</strong> ICMP ping, 1,000 packets per run. We report the median as a
robust summary; the mean is too sensitive to the occasional rogue packet.</p>
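<p>A minimal sketch of that summary step, assuming the system
<code>ping</code> with the <code>-c</code> flag as on Linux or macOS; output
parsing varies slightly by platform:</p>
<pre><code class="language-python">import re
import statistics
import subprocess

def median_rtt_ms(host, count=1000):
    """Median ICMP round-trip time to host, in milliseconds.

    Median rather than mean: a few rogue packets drag the mean
    noticeably but barely move the median.
    """
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    )
    rtts = [float(m) for m in re.findall(r"time=([0-9.]+)", result.stdout)]
    return statistics.median(rtts)
</code></pre>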
<p><strong>End-to-end audio latency.</strong> An audio signal-loop: transmit a test signal
from site A to site B, have site B return it immediately, estimate round-trip
delay by cross-correlation. One-way latency = signal-loop RTT / 2. This method
captures local processing and buffering at both ends in addition to the network
leg, which is what actually matters for a musician.</p>
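<p>For concreteness, a sketch of the cross-correlation step, assuming the
transmitted test signal and the looped-back recording are already available as
NumPy arrays at a common sample rate. This illustrates the method as described;
it is not the measurement code from the paper.</p>
<pre><code class="language-python">import numpy as np

def one_way_latency_ms(sent, returned, fs):
    """Estimate one-way latency from a signal-loop recording.

    sent:     test signal transmitted from site A (1-D array)
    returned: recording at site A after site B looped the signal
              straight back (1-D array, same sample rate)
    fs:       sample rate in Hz

    The lag of the cross-correlation peak is the full-loop RTT in
    samples; one-way latency is RTT / 2, as described above.
    """
    corr = np.correlate(returned, sent, mode="full")
    lag = int(np.argmax(corr)) - (len(sent) - 1)
    return 1000.0 * lag / fs / 2.0
</code></pre>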
<p><strong>Video latency.</strong> Component-based estimation (capture frame cadence +
processing pipeline + display). We did not have a frame-accurate video
loopback method, so treat these numbers as estimates rather than precision
measurements. That caveat matters less than it might seem because, as you will
see, video was always slower than audio by a wide enough margin that it did not
drive the operational decisions.</p>
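<p>The component model is simple enough to state as arithmetic. The pipeline
and display figures below are placeholders, not measured values; only the
frame-cadence term follows directly from the frame rate.</p>
<pre><code class="language-python">def video_latency_ms(fps=60, pipeline_ms=10.0, display_ms=8.0):
    """Component-based one-way video latency estimate:
    worst-case capture cadence (one full frame interval) plus
    processing pipeline plus display. pipeline_ms and display_ms
    are illustrative placeholders, not measured values.
    """
    cadence_ms = 1000.0 / fps  # up to 16.7 ms at 60 fps
    return cadence_ms + pipeline_ms + display_ms
</code></pre>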
<p><strong>Firewall impact.</strong> A controlled 4-hour session on the Cologne–Vienna link,
alternating between a DMZ configuration (direct research-backbone access) and
a transparent enterprise firewall, logging packet loss and decoder instability.</p>
<p>Six partner institutions, air distances from 175 to 1,465 km, measurements
collected between October 2020 and March 2023.</p>
<hr>
<h2 id="results">Results</h2>
<h3 id="audio-latency">Audio latency</h3>
<table>
  <thead>
      <tr>
          <th>Partner (from Cologne)</th>
          <th>Air distance (km)</th>
          <th>Median RTT (ms)</th>
          <th>One-way audio latency (ms)</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Prague</td>
          <td>535</td>
          <td>5.0</td>
          <td>7.5</td>
      </tr>
      <tr>
          <td>Vienna</td>
          <td>745</td>
          <td>7.0</td>
          <td>9.5</td>
      </tr>
      <tr>
          <td>Detmold</td>
          <td>175</td>
          <td>7.5</td>
          <td>10.0</td>
      </tr>
      <tr>
          <td>Trieste</td>
          <td>775</td>
          <td>10.0</td>
          <td>12.5</td>
      </tr>
      <tr>
          <td>Rome</td>
          <td>1,090</td>
          <td>17.5</td>
          <td>20.0</td>
      </tr>
      <tr>
          <td>Tallinn</td>
          <td>1,465</td>
          <td>19.5</td>
          <td>22.0–22.5</td>
      </tr>
  </tbody>
</table>
<p>The number that jumps out immediately: <strong>Detmold (175 km away) has higher
latency than Vienna (745 km away).</strong> This is a routing issue, not a physics
one. The Detmold link was traversing a less efficient campus path that added
extra hops before reaching the research backbone. Prague, by contrast, was
connected via a particularly short routing path and achieved the lowest latency
of any link despite not being the geographically closest.</p>
<p>The practical implication: geographic distance is a poor predictor of
achievable latency. Measure RTT; do not estimate from a map.</p>
<h3 id="video-latency">Video latency</h3>
<p>Estimated one-way video latency was 20–35 ms across all configurations,
with the dominant contributions coming from frame cadence (at 60 fps, you wait
up to 16.7 ms for a frame to be captured regardless of what the network is
doing) and buffering at the decoder. In every deployment, video consistently
lagged audio. Musicians unsurprisingly fell back on audio for synchronisation
and treated video as a supplementary cue — useful for expressive and social
information, not for timing.</p>
<h3 id="the-firewall-experiment">The firewall experiment</h3>
<p>This is the result I find most important for anyone planning a similar
deployment.</p>
<table>
  <thead>
      <tr>
          <th>Metric</th>
          <th>DMZ (no firewall)</th>
          <th>With enterprise firewall</th>
          <th>Change</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Dropped audio packets</td>
          <td>0.002%</td>
          <td>0.052%</td>
          <td>+26×</td>
      </tr>
      <tr>
          <td>Audio buffer realignments/hour</td>
          <td>0.3</td>
          <td>3.9</td>
          <td>+13×</td>
      </tr>
      <tr>
          <td>Dropped video frames</td>
          <td>0.04%</td>
          <td>0.74%</td>
          <td>+18×</td>
      </tr>
      <tr>
          <td>Additional latency</td>
          <td>—</td>
          <td>0.5–1.0 ms</td>
          <td>—</td>
      </tr>
  </tbody>
</table>
<p>The raw latency increase (0.5–1.0 ms) is small and largely irrelevant. The
packet loss and buffer event increases are not. A 26-fold increase in dropped
audio packets on an otherwise uncongested link means the firewall is doing
something — likely deep packet inspection or stateful tracking — that
introduces enough irregularity to destabilise small audio buffers. This forces
you to either accept dropouts or increase buffer size, and increasing buffer
size increases latency.</p>
<p>The message is: if your institution requires traffic inspection for
security policy compliance, you are paying a latency tax that is more about
<em>stability</em> than the raw delay number, and that tax is substantial.</p>
<hr>
<h2 id="discussion">Discussion</h2>
<p>Based on the measured latencies and reported musical tolerances from the
literature, I would roughly characterise the links as follows:</p>
<ul>
<li>
<p><strong>Prague, Vienna, Detmold, Trieste (7.5–12.5 ms):</strong> Compatible with
most repertoire including rhythmically demanding chamber music.
Musicians in our sessions reported the interaction as &ldquo;natural&rdquo; or
&ldquo;like being in the same room&rdquo; at these latencies.</p>
</li>
<li>
<p><strong>Rome (20 ms):</strong> Usable with attention to repertoire and tempo.
Slower movements and music where tight rhythmic locking is not the
primary aesthetic concern work well. Rhythmically dense passages at
fast tempi become harder.</p>
</li>
<li>
<p><strong>Tallinn (22–22.5 ms):</strong> At the upper edge of the comfortable range.
Still usable — we ran a concert collaboration in March 2023 — but
musicians adapt their interaction strategies, leaning more on musical
anticipation than reactive synchronisation.</p>
</li>
</ul>
<p>What is notably absent from this data: anything outside the European
research-network context. All six links ran on GÉANT or national backbone
equivalents with favourable jitter characteristics. The numbers almost
certainly do not transfer directly to commodity internet, satellite links, or
mixed-topology paths.</p>
<p><strong>Limitations I want to be explicit about.</strong> The video latency estimates are
component-based, not directly measured, so treat that 20–35 ms range with
appropriate scepticism. The firewall comparison is a single 4-hour session on
a single link; I would not want to extrapolate too aggressively to other
firewall vendors or configurations. And this is an operational measurement
study, not a controlled perceptual experiment — I cannot tell you from this
data at precisely what latency threshold a given ensemble will declare a
session unusable, because that depends on the music, the musicians, and
factors I did not measure.</p>
<hr>
<h2 id="practical-takeaways">Practical Takeaways</h2>
<p>For anyone setting up a similar system:</p>
<ol>
<li><strong>Measure RTT before committing to a partner institution.</strong> A 100 km
difference in air distance can easily be swamped by routing differences.</li>
<li><strong>Get DMZ placement if at all possible.</strong> The firewall results suggest
this matters more than any other single configuration decision.</li>
<li><strong>Minimise campus hops between your endpoint and the research backbone.</strong>
Each additional switching layer adds jitter risk.</li>
<li><strong>Use small audio buffers and monitor for underruns.</strong> If your baseline
RTT is good, your buffer can be small; if underruns increase, that is an
early warning that network stability is degrading before packet loss
becomes audible. A minimal monitoring sketch follows this list.</li>
<li><strong>Accept that video will lag audio and design your session accordingly.</strong>
This is not a system failure; it is a consequence of how video pipelines
work at low latency. Plan for it.</li>
</ol>
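<p>On takeaway 4, a minimal monitoring sketch using the Python
<code>sounddevice</code> bindings to PortAudio. The block size, sample rate and
duration are illustrative choices, not values from the study.</p>
<pre><code class="language-python">import sounddevice as sd

underruns = 0

def callback(indata, outdata, frames, time, status):
    """Pass audio through and count the underruns PortAudio reports."""
    global underruns
    if status.output_underflow:
        underruns += 1
    outdata[:] = indata  # straight pass-through

# 64 samples at 48 kHz is a ~1.3 ms block, deliberately aggressive:
# a rising underrun count is the early warning described above.
with sd.Stream(samplerate=48000, blocksize=64, channels=2, callback=callback):
    sd.sleep(10_000)  # monitor for 10 seconds

print(f"output underruns in 10 s: {underruns}")
</code></pre>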
<hr>
<h2 id="references">References</h2>
<p>Carôt, A. (2011). Low latency audio streaming for Internet-based musical
interaction. <em>Advances in Multimedia and Interactive Technologies</em>.
<a href="https://doi.org/10.4018/978-1-61692-831-5.ch015">https://doi.org/10.4018/978-1-61692-831-5.ch015</a></p>
<p>Drioli, C., Allocchio, C., &amp; Buso, N. (2013). Networked performances and
natural interaction via LOLA. <em>LNCS</em>, 7990, 240–250.
<a href="https://doi.org/10.1007/978-3-642-40050-6_21">https://doi.org/10.1007/978-3-642-40050-6_21</a></p>
<p>Medina Victoria, A. (2019). <em>A method for the measurement of the latency
tolerance range of Western musicians</em>. Ph.D. dissertation, Cork Institute
of Technology (now Munster Technological University).</p>
<p>Rottondi, C., Chafe, C., Allocchio, C., &amp; Sarti, A. (2016). An overview on
networked music performance technologies. <em>IEEE Access</em>, 4, 8823–8843.
<a href="https://doi.org/10.1109/ACCESS.2016.2628440">https://doi.org/10.1109/ACCESS.2016.2628440</a></p>
<p>Tsioutas, K. &amp; Xylomenos, G. (2021). On the impact of audio characteristics
to the quality of musicians experience in network music performance. <em>JAES</em>,
69(12), 914–923. <a href="https://doi.org/10.17743/jaes.2021.0041">https://doi.org/10.17743/jaes.2021.0041</a></p>
<p>Ubik, S., Halak, J., Kolbe, M., Melnikov, J., &amp; Frič, M. (2021). Lessons
learned from distance collaboration in live culture. <em>AISC</em>, 1378, 608–615.
<a href="https://doi.org/10.1007/978-3-030-74009-2_77">https://doi.org/10.1007/978-3-030-74009-2_77</a></p>
<hr>
<h2 id="changelog">Changelog</h2>
<ul>
<li><strong>2026-01-20</strong>: Updated the Drioli et al. (2013) LNCS volume number to 7990 (ECLAP 2013 proceedings). Updated the Ubik et al. (2021) AISC volume number to 1378 and page range to 608–615. Updated the fifth author&rsquo;s surname to &ldquo;Frič.&rdquo;</li>
</ul>
]]></content:encoded>
    </item>
  </channel>
</rss>
