From GNUstepWiki

An object-oriented software system for building music, sound, signal processing, and MIDI applications.

Current Version: V5.5.2


The MusicKit is an object-oriented software system for building music, sound, signal processing, and MIDI applications. It has been used in such diverse commercial applications as music sequencers, computer games, and document processors. Professors and students in academia have used the MusicKit in a host of areas, including music performance, scientific experiments, computer-aided instruction, and physical modeling. The MusicKit was the first to unify the MIDI and Music V paradigms, thus combining interaction with generality (Music V, written by Max Mathews and others at Bell Labs four decades ago, was the first widely available "computer music compiler").
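To give a flavour of the object-oriented design described above, here is a minimal sketch that builds a one-note score and writes it out as a ScoreFile. It assumes the MK-prefixed class and parameter names of recent MusicKit releases (MKNote, MKPart, MKScore, MK_freq, MK_amp, MK_noteDur); treat it as an illustration rather than a definitive program, as details may differ between versions.

```objc
// Hedged sketch, assuming the MK-prefixed MusicKit API:
// build a score containing one part with one note, then
// write it out in the textual ScoreFile format.
#import <Foundation/Foundation.h>
#import <MusicKit/MusicKit.h>

int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    MKNote *note = [[MKNote alloc] init];
    [note setNoteType:MK_noteDur];         // note with an explicit duration
    [note setDur:0.5];                     // half a second
    [note setPar:MK_freq toDouble:440.0];  // A4
    [note setPar:MK_amp toDouble:0.1];     // modest amplitude

    MKPart *part = [[MKPart alloc] init];
    [part addNote:note];

    MKScore *score = [[MKScore alloc] init];
    [score addPart:part];
    [score writeScorefile:@"example.score"]; // textual ScoreFile output

    [pool release];
    return 0;
}
```

The same MKScore could instead be handed to an MKScorePerformer for real-time playback; writing a ScoreFile is simply the most self-contained way to show the Note/Part/Score hierarchy.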

The NeXT MusicKit was first demonstrated at the 1988 NeXT product introduction and was bundled in NeXT software releases 1.0 and 2.0. Beginning with NeXT's 3.0 release, the MusicKit was no longer part of the standard NeXT software release. Instead, it was distributed and supported as Version 4.0 by the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University. Versions 5.0 to 5.4.1 were then supported by tomandandy music, which ported the MusicKit to several more popular operating systems.

The MusicKit Distribution is a comprehensive package that includes on-line documentation, programming examples, utilities, applications and sample score documents. The MusicKit depends on the SndKit distribution, originally written by Stephen Brandon, and both framework collections are available at the same distribution site. The SndKit was written to be a complete open source implementation of NeXT's SoundKit. The rewrite was started, and largely finished, before the SoundKit itself was released in source code form.

Source code is available for everything, with the exception of the NeXT hardware implementation of the low-level sound and DSP drivers. This means researchers and developers may study the source or even customize the MusicKit and DSP tools to suit their needs. Enhancements can be committed to the CVS repository to have them incorporated into future releases. Commercial software developers may freely incorporate and adapt the software to accelerate development of software products.


Features

  • Applicable to composers writing real-time computer music applications.
  • Applicable to programmers writing cross-platform audio/music applications.
  • Extensible, high-level object-oriented frameworks that are a superset of the Music V and MIDI paradigms.
  • Written in Objective C and C, using Apple's OpenStep/Cocoa API, the FoundationKit.
  • The PyObjC bridge from Python to Objective-C enables applications and utilities to be written in Python, an interpreted object-oriented language.
  • Functionally comparable (although architecturally dissimilar) to JMSL (Java Music Specification Language).
  • Representation system capable of depicting phrase-level structure such as legato transitions.
  • General time management/scheduling mechanism, supporting synchronization to MIDI time code.
  • Efficient real-time synthesis and sound processing, including an option for quadraphonic sound.
  • Complete support for multiple MIDI inputs and outputs.
  • Fully-dynamic DSP resource allocation system with dynamic linking and loading, on multiple DSPs.
  • Digital sound I/O from the DSP port with support for serial port devices by all popular vendors.
  • Non-real time mode, where the DSP returns data to the application or writes a sound file.
  • Suite of applications, including Ensemble ― an interactive algorithmic composition and performance environment (including a built-in sampler), and ScorePlayer ― a Scorefile and standard MIDI file player.
  • Library of instruments, including FM, wavetable, physical modeling and waveshaping synthesis.
  • Library of unit generators for synthesis and sound processing.
  • Documentation, programming examples, utilities, including a sound file mixer, sample rate converter, etc.
  • ScoreFile, a textual scripting language for music.
  • Connectable audio processing modules (“plugins”) including standard audio effects such as reverb.
  • Sound data held in a specifiable variety of formats, e.g. 8-, 16-, or 24-bit integer or floating point, allowing a trade-off between sample data size and processing time.
  • MP3 file reading and writing. Decoding of MP3 can be done in a background thread after reading, or on-the-fly during playback, allowing a trade-off between memory consumption and processor load.
  • MP3 and Ogg/Vorbis streaming of audio output to web servers using the libshout library. The libshout library is licensed under the LGPL, not the GPL, and so does not compromise the MusicKit license.
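The ScoreFile language listed above can be illustrated with a short sketch. This is a hedged reconstruction from memory of the ScoreFile syntax, not a verified sample: the part declaration, synthPatch assignment, time statements (`t`), and pitch-name parameters are assumed to follow the shipped MusicKit examples, and details may differ.

```
/* Hedged ScoreFile sketch: one part playing two notes on an
   assumed "Pluck" synthpatch; exact header statements and
   parameter names may differ from the shipped examples. */
part part1;
part1 synthPatch:"Pluck";

BEGIN;
t 0;
part1 (noteDur, 0.5) freq:c4 amp:0.1;
t +0.5;
part1 (noteDur, 0.5) freq:e4 amp:0.1;
END;
```

A file like this can be played directly with the ScorePlayer application mentioned above, or read into an MKScore object by a program.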


Leigh M. Smith

Related Links