<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Reflective-Practice on Sebastian Spicker</title>
    <link>https://sebastianspicker.github.io/tags/reflective-practice/</link>
    <description>Recent content in Reflective-Practice on Sebastian Spicker</description>
    <image>
      <title>Sebastian Spicker</title>
      <url>https://sebastianspicker.github.io/og-image.png</url>
      <link>https://sebastianspicker.github.io/og-image.png</link>
    </image>
    <generator>Hugo -- 0.160.0</generator>
    <language>en</language>
    <lastBuildDate>Fri, 22 Nov 2024 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://sebastianspicker.github.io/tags/reflective-practice/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>After the Connection Is Stable, the Hard Part Begins</title>
      <link>https://sebastianspicker.github.io/posts/nmp-curriculum-reflective-practice/</link>
      <pubDate>Fri, 22 Nov 2024 00:00:00 +0000</pubDate>
      <guid>https://sebastianspicker.github.io/posts/nmp-curriculum-reflective-practice/</guid>
      <description>A third post in the networked music performance series. Technical latency is solved. Institutional infrastructure has a name. What students actually learn — and what conservatoire curricula consistently get wrong about teaching it — turns out to be a different problem entirely.</description>
      <content:encoded><![CDATA[<p><em>Third post in a series. The <a href="/posts/nmp-latency-lola-mvtp/">August 2023 post</a>
covered latency measurements across six European research-network links.
The <a href="/posts/digital-music-labs-infrastructure/">June 2024 post</a> covered
what institutional infrastructure needs to look like for any of that to
be sustainably usable. This one covers what happens after both of those
problems are solved — which is when the genuinely interesting educational
challenges start.</em></p>
<p><em>Based on a manuscript with colleagues from the RAPP Lab. Not yet peer-reviewed.</em></p>
<hr>
<h2 id="the-gap-nobody-talks-about">The Gap Nobody Talks About</h2>
<p>There is a version of the NMP success story that stops too early. It goes: we
installed LoLa, measured the latency, it came in at 9.5 ms to Vienna, the
musicians played together across 745 km, it worked. Success.</p>
<p>What this story skips is the classroom after the demo. The student who can
follow a setup checklist perfectly and still has no idea what to do musically
when the connection is stable. The ensemble that gets a clean signal running
and then plays exactly the same repertoire in exactly the same way they would
in a co-present rehearsal, fighting the latency instead of working with it,
frustrated when it does not feel right. The assessment rubric that checks off
&ldquo;maintained stable connection&rdquo; and &ldquo;completed the performance&rdquo; and has nothing
to say about everything that actually constitutes musical learning in a
networked context.</p>
<p>The gap between <em>technical feasibility</em> and <em>educational transformation</em> is
the subject of this post. Closing it turns out to require a different kind of
curriculum design than most conservatoires have tried.</p>
<hr>
<h2 id="what-gets-taught-versus-what-needs-to-be-learned">What Gets Taught Versus What Needs to Be Learned</h2>
<p>The default curricular response to NMP has been to treat it as a technical
skill with an artistic application. Students learn to configure an audio
interface, manage routing, establish a LoLa connection, and then — implicitly
— go do music. The technical content gets staged as a prerequisite to the
&ldquo;real&rdquo; work.</p>
<p>This ordering is wrong in a specific way. Technical setup work is genuinely
necessary, but making it a prerequisite treats the relationship between
technology and musical practice as sequential rather than recursive. In
practice, the interesting musical problems only become visible <em>through</em> the
technical ones. A student does not understand why buffer size matters until
they have felt the difference between a 5 ms and a 40 ms offset in a
coordination-intensive passage. A student does not develop an opinion about
audio routing configurations until they have experienced a rehearsal collapse
caused by a routing error they could have prevented.</p>
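<p>For readers who want the numbers behind that 5 ms versus 40 ms
difference, a back-of-envelope model is enough to show why buffer size
dominates the offset a player feels. The sketch below is added here for
illustration only; it assumes one buffer of queuing at each end plus a
fixed converter delay, and it is not how LoLa or any particular stack
computes its latency. Real signal chains add driver and scheduling
overhead on top of this.</p>
<pre><code class="language-python">def one_way_latency_ms(buffer_frames: int, sample_rate_hz: int,
                       network_one_way_ms: float,
                       converter_ms: float = 1.0) -&gt; float:
    """Rough 'mouth-to-ear' estimate: one buffer of queuing at the
    sender, one at the receiver, AD/DA conversion, network path.
    Illustrative model only."""
    buffering_ms = 2 * (buffer_frames / sample_rate_hz) * 1000.0
    return buffering_ms + converter_ms + network_one_way_ms

# 64 frames at 48 kHz over a ~2 ms research-network path: ~5.7 ms
print(one_way_latency_ms(64, 48_000, 2.0))
# 512 frames at 44.1 kHz over the same path: ~26 ms, audibly different
print(one_way_latency_ms(512, 44_100, 2.0))
</code></pre>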
<p>The RAPP Lab&rsquo;s recurring insight across several years of module iterations
at HfMT Köln was more direct: once learners can establish a stable connection,
the harder challenge is developing artistic, collaborative and reflective
strategies for making music <em>together apart</em>. Technical fluency is a
foundation, not a destination.</p>
<hr>
<h2 id="the-curriculum-we-ended-up-with">The Curriculum We Ended Up With</h2>
<p>It took several cycles to get there. The early format was weekend workshops —
open, exploratory, no formal assessment, primarily for advanced students who
self-selected in. These were useful precisely because they were informal: they
revealed quickly how technical and musical questions become inextricable once
you are actually playing, and they gave us evidence about where students got
stuck that we would not have found from a needs analysis.</p>
<p>Over time, elements of those workshops were developed into recurring
curriculum-embedded modules, which then fed into independent study projects
and eventually into external collaborations and performances. The trajectory
mattered: moving from a one-off event to something longitudinal meant that
knowledge built across cohorts rather than resetting every time.</p>
<p>The module structure that emerged has three interlocking elements:</p>
<p><strong>Progressive task design.</strong> Early sessions are tightly scoped:
specific technical-musical exercises, limited repertoire, well-defined
success criteria. Later sessions move toward open-ended projects, student-led
rehearsal planning, and eventually cross-institutional partnerships where
variables are genuinely outside anyone&rsquo;s control. The point of the early
constraints is not to make things easier — it is to create conditions where
students can notice what they are doing rather than just surviving.</p>
<p><strong>Journals and debriefs.</strong> Students kept individual reflective journals
throughout modules, documenting not just what happened but how they responded
to it — technical problems, musical decisions, moments of coordination failure
and recovery, questions they could not answer at the time. Group debriefs
after each rehearsal then turned those individual threads into collective
knowledge: comparing strategies, naming the problems that came up repeatedly,
developing shared language for rehearsal coordination.</p>
<p>The debrief is the part of this model that I think gets undervalued. It is
not just reflection — it is <em>curriculum production</em>. Strategies that emerged
from one cohort&rsquo;s debriefs became documented starting points for subsequent
cohorts. Knowledge accumulated rather than evaporating when the semester ended.</p>
<p><strong>Portfolio assessment.</strong> Rather than being assessed primarily on a final
performance, students assembled portfolios that could include curated journal
excerpts, rehearsal documentation, reflective syntheses, and accounts of
how their thinking changed. The question being assessed was not &ldquo;did you play
the concert&rdquo; but &ldquo;can you articulate why you made the decisions you made, and
what you would do differently.&rdquo;</p>
<hr>
<h2 id="what-students-actually-learn-when-the-curriculum-works">What Students Actually Learn (When the Curriculum Works)</h2>
<p>Four outcomes recurred across the RAPP Lab iterations, consistently enough
to be worth naming:</p>
<h3 id="1-technical-agency">1. Technical agency</h3>
<p>This is different from technical competence. Competence means you can follow
a procedure. Agency means you understand the procedure well enough to deviate
from it intelligently when something goes wrong — to diagnose what failed,
generate a hypothesis about why, and try something different.</p>
<p>The shift happened when students stopped treating technical problems as
interruptions to the music and started treating them as information about
the system they were working inside. A dropout is not just an annoyance; it
is evidence about where the failure occurred. Getting to that reframe took,
on average, several weeks of structured reflection. It did not happen from
reading documentation.</p>
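<p>To make that reframe concrete, here is the shape of the first-pass
triage a student with technical agency runs when a dropout hits. This is
a hypothetical sketch of the reasoning, not a tool the modules used; the
function name, log formats and thresholds are invented for illustration.</p>
<pre><code class="language-python">def locate_dropout(dropout_t_s, rtt_log, xrun_log, window_s=0.5):
    """Heuristic triage: did a dropout coincide with a network jitter
    spike, a local buffer under-run, or both?

    rtt_log:  list of (timestamp_s, rtt_ms) network probes
    xrun_log: list of timestamps of local audio under/overruns
    """
    def near(t):
        return abs(t - dropout_t_s) &lt;= window_s

    rtt_spike = any(near(t) and rtt &gt; 30.0 for t, rtt in rtt_log)
    local_xrun = any(near(t) for t in xrun_log)
    if local_xrun and not rtt_spike:
        return "local audio stack: try a larger buffer, check CPU load"
    if rtt_spike and not local_xrun:
        return "network path: check for competing traffic or rerouting"
    return "ambiguous: inspect both logs before changing anything"
</code></pre>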
<h3 id="2-adaptive-improvisation">2. Adaptive improvisation</h3>
<p>Latency changes what real-time musical coordination can mean. You cannot rely
on the same multimodal cues — breath, gesture, shared acoustics — that make
co-present ensemble playing feel intuitive. You have to develop explicit
cueing systems, turn-taking conventions, contingency plans for when the
connection degrades mid-performance.</p>
<p>What we observed was that this constraint generated a specific kind of
musical creativity. Students improvised not just with musical material but
with rehearsal organisation itself — inventing systems, testing them,
discarding the ones that did not work, documenting the ones that did. Some of
the most musically interesting moments in the modules came from sessions where
the technology was behaving badly and students had to make it work anyway.</p>
<p>There is research on &ldquo;productive failure&rdquo; — deliberately designing tasks that
exceed students&rsquo; current control, because the struggle and recovery produce
deeper learning than smooth execution (Kapur 2016). NMP turns out to be a
natural context for this, not by design but because the network does not
cooperate on schedule.</p>
<h3 id="3-collaborative-communication">3. Collaborative communication</h3>
<p>Co-present rehearsal relies heavily on implicit communication: the
physical space makes many things legible without anyone having to say them.
In a networked rehearsal, the spatial and gestural channel is degraded or
absent. Students had to make explicit what would normally be implicit —
articulating coordination strategies, naming the problems they were
experiencing rather than hoping the ensemble would notice, developing a
vocabulary for talking about timing and latency as musical parameters.</p>
<p>This turned out to generalise. Students who had worked through several
networked rehearsal cycles were noticeably better at explicit musical
communication in co-present settings too, because they had been forced to
develop the vocabulary in a context where it was necessary.</p>
<h3 id="4-reflective-identity">4. Reflective identity</h3>
<p>The students who got the most out of the modules were the ones who stopped
waiting for the conditions to improve and started working with the conditions
as they were. Latency as a compositional constraint rather than a defect to
be routed around. Uncertainty as an artistic condition rather than a
technical failure.</p>
<p>The journal entries where this shift is most visible are not the ones that
describe what the student did. They are the ones that describe a change in
how the student understands their own practice — who they are as a musician
in relation to an environment they cannot fully control. That is a different
kind of outcome than anything a timing metric captures.</p>
<hr>
<h2 id="the-assessment-problem">The Assessment Problem</h2>
<p>The hardest part of all of this to translate into institutional language is
assessment. The conservatoire has well-developed frameworks for evaluating
performances. It has much weaker frameworks for evaluating the learning that
happens before and between and underneath performances.</p>
<p>Checklist rubrics — was the connection stable, was the latency within
acceptable range, did the performance complete — are useful for safety and
reliability. They are poor evidence for whether a student has developed the
capacity to work reflectively and artistically in a mediated ensemble
environment. A student who achieved a stable connection by following
instructions exactly and a student who achieved it by diagnosing a routing
error mid-session look identical on a checklist. They have had very different
learning experiences.</p>
<p>Portfolio assessment addresses this by making the reasoning visible. When a
student can explain why they chose a particular buffer configuration given
the specific network characteristics of that session, how that choice affected
the musical phrasing in the piece they were rehearsing, and what they would
change next time — that is evidence of something real. It is also harder to
assess than a timing log, which is probably why most programmes avoid it.</p>
<p>The argument is not that quantitative indicators are useless. It is that
they function better as scaffolding for reflective judgement than as the
primary evidence of learning. Mixed assessment ecologies — technical logs
plus journals plus portfolio syntheses — are more honest about what is
actually happening educationally.</p>
<hr>
<h2 id="what-this-does-not-solve">What This Does Not Solve</h2>
<p>The model described here depends on teaching staff who can facilitate
reflective dialogue, curate knowledge across cohorts, and participate in
iterative curriculum redesign. That is a specific professional competence
that is not automatically present in a conservatoire staffed primarily by
performing musicians. The training and support structures needed to develop
it are an open question the manuscript does not fully answer.</p>
<p>The curriculum is also not portable as-is. The RAPP Lab model emerged in a
specific institutional context — HfMT Köln, specific partner network,
specific funding structure, specific cohort of students. The four outcomes
and the general pedagogical logic may transfer; the specific formats will
need adaptation. Any institution that tries to implement this without going
through at least one cycle of their own iterative development is likely to
end up with a checklist version of something that works only when it is a
living process.</p>
<p>And the technology keeps moving. LoLa is a mature platform but the
ecosystem around it — network configurations, operating system support,
hardware lifecycles — changes faster than curriculum documentation. Building
responsiveness into the curriculum itself, rather than treating it as a fixed
syllabus, is the structural answer. Easier to recommend than to institutionalise.</p>
<hr>
<h2 id="references">References</h2>
<p>Barrett, H. C. (2007). Researching electronic portfolios and learner
engagement. <em>Journal of Adolescent &amp; Adult Literacy</em>, 50(6), 436–449.</p>
<p>Borgdorff, H. (2012). <em>The Conflict of the Faculties.</em> Leiden University Press.</p>
<p>The Design-Based Research Collective (2003). Design-based research: An
emerging paradigm for educational inquiry. <em>Educational Researcher</em>, 32(1),
5–8. <a href="https://doi.org/10.3102/0013189X032001005">https://doi.org/10.3102/0013189X032001005</a></p>
<p>Kapur, M. (2016). Examining productive failure, productive success,
unproductive failure, and unproductive success in learning. <em>Educational
Psychologist</em>, 51(2), 289–299. <a href="https://doi.org/10.1080/00461520.2016.1155457">https://doi.org/10.1080/00461520.2016.1155457</a></p>
<p>Lave, J. &amp; Wenger, E. (1991). <em>Situated Learning.</em> Cambridge University Press.</p>
<p>Sadler, D. R. (2009). Indeterminacy in the use of preset criteria for
assessment and grading. <em>Assessment &amp; Evaluation in Higher Education</em>,
34(2), 159–179. <a href="https://doi.org/10.1080/02602930801956059">https://doi.org/10.1080/02602930801956059</a></p>
<p>Schön, D. A. (1983). <em>The Reflective Practitioner.</em> Basic Books.</p>
<p>Wenger, E. (1998). <em>Communities of Practice.</em> Cambridge University Press.
<a href="https://doi.org/10.1017/CBO9780511803932">https://doi.org/10.1017/CBO9780511803932</a></p>
<hr>
<h2 id="changelog">Changelog</h2>
<ul>
<li><strong>2026-01-20</strong>: Updated the Sadler (2009) reference title to &ldquo;Indeterminacy in the use of preset criteria for assessment and grading,&rdquo; matching the journal article at this DOI. Updated the Kapur (2016) reference to the full published title: &ldquo;Examining productive failure, productive success, unproductive failure, and unproductive success in learning.&rdquo;</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>They Told Me Not to Use Design Thinking. They Were Right.</title>
      <link>https://sebastianspicker.github.io/posts/design-thinking-vs-grounded-theory/</link>
      <pubDate>Tue, 23 Nov 2021 00:00:00 +0000</pubDate>
      <guid>https://sebastianspicker.github.io/posts/design-thinking-vs-grounded-theory/</guid>
      <description>When you are a physicist doing education research, methodology feels like a bureaucratic formality standing between you and the interesting work. Everyone told me to use grounded theory instead of design thinking in my thesis. I ignored them. This is the postmortem.</description>
      <content:encoded><![CDATA[<p><em>A follow-up to the <a href="/posts/mission-to-mars/">Mission to Mars</a> post, which
describes the experimental work. This one is about the methodology layer
underneath it — specifically, what I got wrong.</em></p>
<hr>
<h2 id="the-setup">The Setup</h2>
<p>My background is in physics. I ended up in physics education research
sideways, through the astro-lab project and through a genuine interest in
why students find physics so alienating and what might help. When it came
time to frame that work as a thesis, I had to choose a methodology.</p>
<p>I chose design thinking. Or more precisely, I chose something that
borrowed heavily from design-based research and design thinking frameworks
and that felt, at the time, like the obvious match for what I was doing.
I was designing experiments. I was iterating on them. I was testing them
with students and refining them. Design thinking is a framework for
exactly this process. What could be more natural?</p>
<p>Several people told me I was making a mistake. Colleagues with more
qualitative research experience, a supervisor who had been through
the methodology debates in education research more times than he wanted
to count. The consistent advice was: use grounded theory. Be systematic
about your data. Let the categories emerge from what you actually observe
rather than from what you designed the experiment to produce.</p>
<p>I thought I understood what they were saying. I did not understand what
they were saying.</p>
<hr>
<h2 id="what-i-thought-design-thinking-gave-me">What I Thought Design Thinking Gave Me</h2>
<p>Design thinking, as a research framing, offered what felt like a clean
correspondence between method and subject matter. The thing I was
producing was a designed artifact — a teaching experiment. The process
I was following was inherently iterative: run it, observe what happens,
revise, run it again. The framework had a vocabulary for this (empathise,
define, ideate, prototype, test) that matched my actual working process.</p>
<p>Design-based research, the academic version of this approach in education,
has a real literature behind it. It is used in educational technology
research and in curriculum development. It is not a made-up category. The
argument for it is reasonable: if you are trying to design effective
educational interventions, then designing and studying those interventions
at the same time is a coherent research strategy.</p>
<p>What I told myself was: I am doing design-based research. The methodology
matches the work. The thesis will describe the design process, the
rationale for each design decision, the iterative refinements, and the
evidence that the final design works. This is a contribution to knowledge
because it produces a principled, evidence-informed design that other
practitioners can use and adapt.</p>
<p>This is not wrong. But it is not enough for a thesis. And I only
understood why it is not enough after I had spent considerable time
trying to make it enough.</p>
<hr>
<h2 id="the-reckoning-in-the-methodology-chapter">The Reckoning in the Methodology Chapter</h2>
<p>The methodology chapter of a thesis is where you have to be explicit
about the epistemological status of your claims. You are not just
describing what you did. You are explaining why the thing you did counts
as knowledge production, what kind of knowledge it produces, and how
someone else could evaluate whether you did it correctly.</p>
<p>This is where design thinking started to come apart.</p>
<p><strong>What kind of claim does a design study make?</strong> The honest answer is:
it makes a claim about this design, in these contexts, with these
students. It does not easily generalise beyond that. If I show that
the Mission to Mars experiment produces measurable improvements in
students&rsquo; understanding of air pressure in a student lab context at
the University of Cologne in 2019, the implication for other teachers
in other contexts is&hellip; unclear. The design worked here. Maybe it
will work for you. Good luck.</p>
<p>A thesis contribution needs to be something more transferable than that.
It needs to produce knowledge about a phenomenon, not just knowledge
about a specific designed object. &ldquo;Here is a well-designed experiment&rdquo;
is a practitioner contribution, which is genuinely valuable, but it is
not the same as a theoretical contribution to the field.</p>
<p><strong>The iteration problem.</strong> Design thinking celebrates iterative
refinement. But in a thesis, every iteration needs to be motivated by
evidence, and the nature of the evidence and how it maps onto the
design changes needs to be made explicit. If I changed something between
version 1 and version 2 of the experiment, the methodology chapter must
explain: what data told me to make that change? How did I analyse it?
What coding framework did I apply? What alternative changes did I
consider and rule out, and on what grounds?</p>
<p>Design thinking has no systematic answer to these questions. It has
process descriptions (&ldquo;we tested with users and gathered feedback&rdquo;) but
not research methodology answers (&ldquo;I applied open coding to the think-aloud
protocols and the following categories emerged, which pointed toward
this specific revision&rdquo;). Without that precision, the &ldquo;iteration&rdquo; in
the methodology chapter looks like: I tried it, it did not quite work,
I made it better. Which is honest but not a researchable process.</p>
<p><strong>The validation problem.</strong> Design-based research often validates its
designs against the criteria that motivated the design. I designed the
experiment to address specific student misconceptions about air pressure.
I then tested whether students who did the experiment had fewer of those
misconceptions afterward. If the answer is yes, the design is validated.</p>
<p>But this is circular in a way that becomes visible under examination.
The misconceptions I targeted were the ones I identified at the start.
The students I studied were the ones who came to my lab. The measurement
instrument I used was one I designed to detect the specific changes
I expected the design to produce. The whole system is oriented toward
confirming the design rather than discovering something about the
phenomenon.</p>
<p>Grounded theory cuts this loop. You start with the data — the
students&rsquo; actual responses, their misconceptions as they express them,
the things that confuse them that you did not anticipate — and you
build categories from the bottom up. What you end up with is a theory
of how students actually think about air pressure (or whatever the topic
is), which may or may not match what you assumed when you designed the
experiment. The cases where it does not match are precisely where the
theoretical contribution lives.</p>
<hr>
<h2 id="what-grounded-theory-would-have-required">What Grounded Theory Would Have Required</h2>
<p>Grounded theory, done properly, is laborious. The Glaserian version
(open coding, theoretical sampling until saturation, constant
comparative method) requires treating every interview, every observation,
every student response as a data source to be systematically analysed,
compared, and connected into a coherent theory.</p>
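<p>The bookkeeping involved is simple enough to caricature in a few
lines; the labour is in the judgement the bookkeeping records. A toy
sketch, assuming no particular coding software, of the structure that
constant comparison sits on top of:</p>
<pre><code class="language-python">from dataclasses import dataclass, field

@dataclass
class Segment:
    """One unit of raw data: a student utterance, an observation note."""
    source: str                 # e.g. "interview-03"
    text: str
    codes: list[str] = field(default_factory=list)

def group_by_code(segments):
    """Collect segments that share a code so they can be compared.

    This is only the filing system. The analysis is the repeated,
    documented decision about which codes a segment earns, and about
    whether codes should merge, split, or be promoted to categories.
    """
    by_code = {}
    for seg in segments:
        for code in seg.codes:
            by_code.setdefault(code, []).append(seg)
    return by_code
</code></pre>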
<p>Theoretical sampling means you do not decide in advance how many students
to study or what contexts to observe. You keep gathering data until new
cases stop producing new categories — until the theory is saturated.
This is methodologically sound and practically painful, because you
cannot know in advance when you will be done.</p>
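<p>The stop-rule is easy to state and hard to apply. As a caricature of
the logic only (saturation is a judgement about categories and their
properties, not a counter; the batch structure here is invented):</p>
<pre><code class="language-python">def saturated(new_codes_per_batch, lookback=3):
    """Sketch of a stop-rule: no new codes have emerged in the last
    `lookback` batches of data. Real saturation judgements weigh
    how well-developed the categories are, not raw counts."""
    recent = new_codes_per_batch[-lookback:]
    return len(recent) == lookback and all(n == 0 for n in recent)

# New codes found per successive batch of interviews:
print(saturated([9, 5, 4, 2, 0, 0, 0]))  # True: plausibly saturated
</code></pre>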
<p>Memoing — writing ongoing analytical notes about the emerging categories
and their relationships — is a discipline that forces you to be explicit
about your reasoning at every step. Not just &ldquo;these two responses seem
similar&rdquo; but &ldquo;these two responses are similar because both students are
treating pressure as a property of moving air, and here is how that
connects to the misconception documented by [citation].&rdquo;</p>
<p>I did not want to do this. I wanted to design experiments. Grounded
theory felt like a detour from the thing I was actually interested in.</p>
<p>The advice I received was: this is not a detour. A systematic analysis
of what students think about air pressure, and how they think about it,
and what experiences shift their thinking, is a theoretical contribution
that would make the experiments more useful to everyone — not just a
record of experiments that worked in one lab in one city in one year.</p>
<p>They were right about this.</p>
<hr>
<h2 id="what-i-actually-learned-too-late-to-use-in-the-thesis">What I Actually Learned (Too Late to Use in the Thesis)</h2>
<p>The most useful student responses in the Mission to Mars experiment
were not the ones that confirmed the design was working. They were the
unexpected ones.</p>
<p>The PVC pipe failure — the moment when the lid pops off and students
hear the sound — was included because I thought it would demonstrate the
direction of pressure force in a visceral way. What I observed, which
I noted but did not systematically analyse, was that different students
interpreted the pop differently. Some immediately understood it as the
internal air pushing out. Others interpreted it as the external vacuum
pulling the lid. A few were unsure which way the force had been directed
even after the event.</p>
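<p>The physics itself is unambiguous even where the students&rsquo;
interpretations were not: the chamber pressure drops, the sealed pipe
stays near atmospheric pressure inside, and the net force on the lid
points outward. A quick calculation, with the dimensions and chamber
pressure assumed for illustration:</p>
<pre><code class="language-python">import math

lid_radius_m = 0.05        # assumed: a 10 cm diameter lid
p_inside_pa = 101_325      # pipe sealed at atmospheric pressure
p_outside_pa = 1_000       # chamber pumped down to ~10 mbar

area_m2 = math.pi * lid_radius_m ** 2
net_outward_force_n = (p_inside_pa - p_outside_pa) * area_m2
print(f"{net_outward_force_n:.0f} N")  # ~788 N; the inside pushes, nothing pulls
</code></pre>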
<p>A grounded theory analysis of those responses would have produced
something genuinely interesting: a typology of how students process
a demonstrable physical event when it conflicts with their existing
pressure intuitions. That typology would have been transferable to
other experimental contexts, other pressure scenarios, other situations
where students encounter the vacuum-suction confusion.</p>
<p>Instead I noted it, described it qualitatively, and moved on because
it was not what the design was optimised to produce.</p>
<p>That is the design thinking trap. You are so focused on the designed
outcome that you treat unexpected observations as noise rather than as
data. Grounded theory treats them as the most valuable data you have.</p>
<hr>
<h2 id="a-note-for-other-physicists-entering-education-research">A Note for Other Physicists Entering Education Research</h2>
<p>If you are coming from a natural science background and you are starting
work in education research, the methodology question will feel foreign
at first. In physics, methodology is largely a matter of technical
choice — which instrument, which statistical test, which model. The
epistemological questions (what kind of knowledge does this produce?
how does it generalise?) are handled by the experimental framework
itself, which is a known, shared, peer-reviewed practice.</p>
<p>In qualitative education research, those questions are not handled in
advance. You have to work them out explicitly, for your specific study,
in writing. This is uncomfortable for people trained in a tradition where
you do the experiment and then write up what happened.</p>
<p>The temptation, for a physicist, is to choose a methodology that feels
like a framework for doing things rather than one that feels like a
framework for thinking about what you found. Design thinking is a
framework for doing things. Grounded theory is a framework for thinking
about what you found.</p>
<p>Both are legitimate. But a thesis needs to make a theoretical contribution,
and theoretical contributions come from systematic analysis of phenomena,
not from documentation of designed objects.</p>
<p>I would have finished faster and understood more if I had done the
uncomfortable thing from the start.</p>
<hr>
<p><em>The experimental work this post is commenting on is described in
<a href="/posts/mission-to-mars/">Mission to Mars</a>. For a more successful later
use of qualitative methodology in a related context, see
<a href="/posts/ai-transcription-grounded-theory/">AI Transcription and Grounded Theory</a>.</em></p>
<hr>
<h2 id="references">References</h2>
<p>Brown, T. (2008). Design thinking. <em>Harvard Business Review</em>, 86(6),
84–92.</p>
<p>The Design-Based Research Collective (2003). Design-based research: An
emerging paradigm for educational inquiry. <em>Educational Researcher</em>,
32(1), 5–8. <a href="https://doi.org/10.3102/0013189X032001005">https://doi.org/10.3102/0013189X032001005</a></p>
<p>Glaser, B. G., &amp; Strauss, A. L. (1967). <em>The Discovery of Grounded
Theory: Strategies for Qualitative Research.</em> Aldine.</p>
<p>Strauss, A., &amp; Corbin, J. (1998). <em>Basics of Qualitative Research:
Techniques and Procedures for Developing Grounded Theory</em> (2nd ed.).
SAGE Publications.</p>
]]></content:encoded>
    </item>
  </channel>
</rss>
