Abstract
Twenty-two children, 5 through 9 years old, were tested in structured and unstructured contexts (software environments) to determine their ability to demonstrate contrasts in tempo and dynamics using a synthesizer keyboard. EZ Vision sequencing software provided the unstructured environment—students heard exactly what they played when they depressed keys. Instant Pleasure software produced a harmonized version of “Twinkle, Twinkle, Little Star” regardless of which keys were played. Under both environments, children could control the tempo and dynamics of individual events. Tempo was defined as the average duration in milliseconds of notes played. Dynamics was defined as MIDI note-on velocity, ranging from 1 to 127. Each child demonstrated four single discriminations (fast, slow, loud, and soft) and four double discriminations (fast/loud, fast/soft, slow/loud, and slow/soft) in each of the two environments, resulting in a total of 16 performances. These performances were recorded via MIDI, and the data were prepared for subsequent statistical analyses. Results showed that the children were able to demonstrate contrasts in loudness (dynamics) and duration (tempo) by playing a synthesizer keyboard. The more structured environment (Instant Pleasure) produced louder performances and a greater range of total playing times across performance demands than the less structured environment (EZ Vision). There was no clear evidence that the double-discrimination demands were more difficult than the single-discrimination demands, except that children played significantly louder on double-discrimination tasks than on single-discrimination tasks. Children consistently paired slower (longer-duration) playing with softer playing even when the task did not require both.
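The abstract's two dependent measures can be made concrete with a short sketch. This is not the authors' analysis code; it is an illustration, under assumed data shapes, of how mean note duration (the tempo measure) and mean note-on velocity (the dynamics measure) could be computed from recorded MIDI events. The event tuple layout and function names are hypothetical.

```python
# Illustrative sketch only: each event is (onset_ms, offset_ms, velocity),
# where velocity is the MIDI note-on value in the range 1-127.

def tempo_measure(events):
    """Tempo proxy: average note duration in milliseconds."""
    return sum(off - on for on, off, _ in events) / len(events)

def dynamics_measure(events):
    """Dynamics proxy: mean MIDI note-on velocity."""
    return sum(vel for _, _, vel in events) / len(events)

# Hypothetical three-note performance.
performance = [(0, 400, 90), (500, 900, 100), (1000, 1600, 110)]
print(tempo_measure(performance))     # → 466.666... (mean duration, ms)
print(dynamics_measure(performance))  # → 100.0 (mean velocity)
```

A faster performance yields a smaller `tempo_measure` (shorter average durations), and a louder one yields a larger `dynamics_measure`, matching the operational definitions in the abstract.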
