Third post in a series. The August 2023 post covered latency measurements across six European research-network links. The June 2024 post covered what institutional infrastructure needs to look like for any of that to be sustainably usable. This one covers what happens after both of those problems are solved — which is when the genuinely interesting educational challenges start.
Based on a manuscript with colleagues from the RAPP Lab. Not yet peer-reviewed.
The Gap Nobody Talks About
There is a version of the NMP success story that stops too early. It goes: we installed LoLa, measured the latency, it came in at 9.5 ms to Vienna, the musicians played together across 745 km, it worked. Success.
What this story skips is the classroom after the demo. The student who can follow a setup checklist perfectly and still has no idea what to do musically when the connection is stable. The ensemble that gets a clean signal running and then plays exactly the same repertoire in exactly the same way they would in a co-present rehearsal, fighting the latency instead of working with it, frustrated when it does not feel right. The assessment rubric that checks off “maintained stable connection” and “completed the performance” and has nothing to say about anything that actually constitutes musical learning in a networked context.
The gap between technical feasibility and educational transformation is the subject of this post. Closing it turns out to require a different kind of curriculum design than most conservatoires have tried.
What Gets Taught Versus What Needs to Be Learned
The default curricular response to NMP has been to treat it as a technical skill with an artistic application. Students learn to configure an audio interface, manage routing, establish a LoLa connection, and then — implicitly — go do music. The technical content gets staged as a prerequisite to the “real” work.
This ordering is wrong in a specific way. Technical setup work is genuinely necessary, but making it a prerequisite treats the relationship between technology and musical practice as sequential rather than recursive. In practice, the interesting musical problems only become visible through the technical ones. A student does not understand why buffer size matters until they have felt the difference between a 5 ms and a 40 ms offset in a coordination-intensive passage. A student does not develop an opinion about audio routing configurations until they have experienced a rehearsal collapse caused by a routing error they could have prevented.
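To make the arithmetic behind that felt difference concrete, here is a back-of-envelope sketch of where the milliseconds come from. The sample rate, buffer sizes, and fibre propagation speed below are round illustrative assumptions, not measurements from the LoLa link described in this series; the point is only that the offset a player feels is a fixed geographic floor plus an adjustable buffering cost.

```python
# Back-of-envelope latency budget for a networked rehearsal.
# All values are illustrative assumptions, not measurements from the
# setup described in this post.

FIBRE_KM_PER_MS = 200.0  # signal speed in optical fibre, roughly 2/3 of c

def buffer_latency_ms(frames: int, sample_rate_hz: int = 48_000) -> float:
    """Time spent filling one audio buffer of `frames` samples."""
    return frames / sample_rate_hz * 1000.0

def propagation_latency_ms(distance_km: float) -> float:
    """One-way travel time through `distance_km` of fibre."""
    return distance_km / FIBRE_KM_PER_MS

def one_way_budget_ms(frames: int, distance_km: float) -> float:
    # Sender buffer + fibre + receiver buffer. Switches, conversion and
    # OS scheduling add more on top; this sketch ignores them.
    return 2 * buffer_latency_ms(frames) + propagation_latency_ms(distance_km)

for frames in (64, 256, 2048):
    print(f"{frames:4d} frames @ 48 kHz over 745 km: "
          f"{one_way_budget_ms(frames, 745):5.1f} ms one way")
```

On these assumed numbers the fibre contributes roughly 3.7 ms one way over 745 km regardless of configuration, while the buffers account for most of the adjustable range. That is the practical sense in which a student comes to feel buffer size: geography sets the floor, and buffering decides how far above it the rehearsal sits.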
The RAPP Lab’s recurring insight across several years of module iterations at HfMT Köln was more direct: once learners can establish a stable connection, the harder challenge is developing artistic, collaborative and reflective strategies for making music together apart. Technical fluency is a foundation, not a destination.
The Curriculum We Ended Up With
It took several cycles to get there. The early format was weekend workshops — open, exploratory, no formal assessment, primarily for advanced students who self-selected in. These were useful precisely because they were informal: they revealed quickly how technical and musical questions become inextricable once you are actually playing, and they gave us evidence about where students got stuck that we would not have found from a needs analysis.
Over time, elements of those workshops were developed into recurring curriculum-embedded modules, which then fed into independent study projects and eventually into external collaborations and performances. The trajectory mattered: moving from a one-off event to something longitudinal meant that knowledge built across cohorts rather than resetting every time.
The module structure that emerged has three interlocking elements:
Progressive task design. Early sessions are tightly scoped: specific technical-musical exercises, limited repertoire, well-defined success criteria. Later sessions move toward open-ended projects, student-led rehearsal planning, and eventually cross-institutional partnerships where variables are genuinely outside anyone’s control. The point of the early constraints is not to make things easier — it is to create conditions where students can notice what they are doing rather than just surviving.
Journals and debriefs. Students keep individual reflective journals throughout the modules, documenting not just what happened but how they responded to it — technical problems, musical decisions, moments of coordination failure and recovery, questions they could not answer at the time. Group debriefs after each rehearsal then turn those individual threads into collective knowledge: comparing strategies, naming the problems that come up repeatedly, developing shared language for rehearsal coordination.
The debrief is the part of this model that I think gets undervalued. It is not just reflection — it is curriculum production. Strategies that emerged from one cohort’s debriefs became documented starting points for subsequent cohorts. Knowledge accumulated rather than evaporating when the semester ended.
Portfolio assessment. Rather than assessing primarily on a final performance, students assemble portfolios that can include curated journal excerpts, rehearsal documentation, reflective syntheses, and accounts of how their thinking changed. The question being assessed is not “did you play the concert” but “can you articulate why you made the decisions you made, and what you would do differently.”
What Students Actually Learn (When the Curriculum Works)
Four outcomes recurred across the RAPP Lab iterations, consistently enough to be worth naming:
1. Technical agency
This is different from technical competence. Competence means you can follow a procedure. Agency means you understand the procedure well enough to deviate from it intelligently when something goes wrong — to diagnose what failed, generate a hypothesis about why, and try something different.
The shift happened when students stopped treating technical problems as interruptions to the music and started treating them as information about the system they were working inside. A dropout is not just an annoyance; it is evidence about where the failure occurred. Getting to that reframe took, on average, several weeks of structured reflection. It did not happen from reading documentation.
2. Adaptive improvisation
Latency changes what real-time musical coordination can mean. You cannot rely on the same multimodal cues — breath, gesture, shared acoustics — that make co-present ensemble playing feel intuitive. You have to develop explicit cueing systems, turn-taking conventions, contingency plans for when the connection degrades mid-performance.
What we observed was that this constraint generated a specific kind of musical creativity. Students improvised not just with musical material but with rehearsal organisation itself — inventing systems, testing them, discarding the ones that did not work, documenting the ones that did. Some of the most musically interesting moments in the modules came from sessions where the technology was behaving badly and students had to make it work anyway.
There is research on “productive failure” — deliberately designing tasks that exceed students’ current control, because the struggle and recovery produce deeper learning than smooth execution (Kapur 2016). NMP turns out to be a natural context for this, not by design but because the network does not cooperate on schedule.
3. Collaborative communication
Co-present rehearsal relies heavily on implicit communication: the physical space makes many things legible without anyone having to say them. In a networked rehearsal, the spatial and gestural channel is degraded or absent. Students had to make explicit what would normally be implicit — articulating coordination strategies, naming the problems they were experiencing rather than hoping the ensemble would notice, developing a vocabulary for talking about timing and latency as musical parameters.
This turned out to generalise. Students who had worked through several networked rehearsal cycles were noticeably better at explicit musical communication in co-present settings too, because they had been forced to develop the vocabulary in a context where it was necessary.
4. Reflective identity
The students who got the most out of the modules were the ones who stopped waiting for the conditions to improve and started working with the conditions as they were. Latency as a compositional constraint rather than a defect to be routed around. Uncertainty as an artistic condition rather than a technical failure.
The journal entries where this shift is most visible are not the ones that describe what the student did. They are the ones that describe a change in how the student understands their own practice — who they are as a musician in relation to an environment they cannot fully control. That is a different kind of outcome than anything a timing metric captures.
The Assessment Problem
The hardest part of all of this to translate into institutional language is assessment. The conservatoire has well-developed frameworks for evaluating performances. It has much weaker frameworks for evaluating the learning that happens before and between and underneath performances.
Checklist rubrics — was the connection stable, was the latency within acceptable range, did the performance complete — are useful for safety and reliability. They are poor evidence for whether a student has developed the capacity to work reflectively and artistically in a mediated ensemble environment. A student who achieved a stable connection by following instructions exactly and a student who achieved it by diagnosing a routing error mid-session look identical on a checklist. They have had very different learning experiences.
Portfolio assessment addresses this by making the reasoning visible. When a student can explain why they chose a particular buffer configuration given the specific network characteristics of that session, how that choice affected the musical phrasing in the piece they were rehearsing, and what they would change next time — that is evidence of something real. It is also harder to assess than a timing log, which is probably why most programmes avoid it.
The argument is not that quantitative indicators are useless. It is that they function better as scaffolding for reflective judgement than as the primary evidence of learning. Mixed assessment ecologies — technical logs plus journals plus portfolio syntheses — are more honest about what is actually happening educationally.
What This Does Not Solve
The model described here depends on teaching staff who can facilitate reflective dialogue, curate knowledge across cohorts, and participate in iterative curriculum redesign. That is a specific professional competence that is not automatically present in a conservatoire staffed primarily by performing musicians. The training and support structures needed to develop it are an open question this post does not fully answer.
The curriculum is also not portable as-is. The RAPP Lab model emerged in a specific institutional context — HfMT Köln, specific partner network, specific funding structure, specific cohort of students. The four outcomes and the general pedagogical logic may transfer; the specific formats will need adaptation. Any institution that tries to implement this without going through at least one cycle of their own iterative development is likely to end up with a checklist version of something that works only when it is a living process.
And the technology keeps moving. LoLa is a mature platform but the ecosystem around it — network configurations, operating system support, hardware lifecycles — changes faster than curriculum documentation. Building responsiveness into the curriculum itself, rather than treating it as a fixed syllabus, is the structural answer. Easier to recommend than to institutionalise.
References
Barrett, H. C. (2007). Researching electronic portfolios and learner engagement. Journal of Adolescent & Adult Literacy, 50(6), 436–449.
Borgdorff, H. (2012). The Conflict of the Faculties. Leiden University Press.
The Design-Based Research Collective (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher, 32(1), 5–8.
Kapur, M. (2016). Examining productive failure, productive success, unproductive failure, and unproductive success in learning. Educational Psychologist, 51(2), 289–299. https://doi.org/10.1080/00461520.2016.1155457
Lave, J. & Wenger, E. (1991). Situated Learning. Cambridge University Press.
Sadler, D. R. (2009). Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education, 34(2), 159–179. https://doi.org/10.1080/02602930801956059
Schön, D. A. (1983). The Reflective Practitioner. Basic Books.
Wenger, E. (1998). Communities of Practice. Cambridge University Press. https://doi.org/10.1017/CBO9780511803932
Changelog
- 2026-01-20: Updated the Sadler (2009) reference title to “Indeterminacy in the use of preset criteria for assessment and grading,” matching the journal article at this DOI. Updated the Kapur (2016) reference to the full published title: “Examining productive failure, productive success, unproductive failure, and unproductive success in learning.”