2017
@inproceedings{fober17c,
  title = {INScore Time Model},
  author = {Dominique Fober and Yann Orlarey and Stéphane Letz},
  url = {inscore-icmc17-final.pdf},
  year = {2017},
  date = {2017-10-16},
  booktitle = {Proceedings of the International Computer Music Conference},
  pages = {64--68},
  abstract = {INScore is an environment for augmented interactive music score design, oriented towards unconventional uses of music notation without excluding conventional approaches. In this environment, although all the objects of a score have a temporal dimension, time remains fixed, i.e., the date (or duration) of an object does not change except when a message is received (sent from an external application or resulting from event handling). Thus, INScore does not include a time manager in the classic sense of the term. This choice was based on the fact that the system was originally designed to be used with sound production software (e.g., Max/MSP, Pure Data), which has stricter real-time constraints than INScore's graphical environment. However, the need to introduce dynamic time has gradually emerged, leading to an original model, both continuous and event-based. The paper presents this model and its properties in the frame of INScore.},
  keywords = {dynamic score, inscore, music score, time model},
  pubstate = {published},
  tppubtype = {inproceedings}
}
2016
@inproceedings{Lepetit-Aimon_tenor2016,
  title = {INScore expressions to compose symbolic scores},
  author = {Gabriel Lepetit-Aimon and Dominique Fober and Yann Orlarey and Stéphane Letz},
  editor = {Richard Hoadley and Chris Nash and Dominique Fober},
  url = {inscore-tenor-2016.pdf},
  isbn = {978-0-9931461-1-4},
  year = {2016},
  date = {2016-01-01},
  booktitle = {Proceedings of the International Conference on Technologies for Music Notation and Representation -- TENOR2016},
  pages = {137--143},
  publisher = {Anglia Ruskin University},
  address = {Cambridge, UK},
  abstract = {INScore is an environment for the design of augmented interactive music scores oriented towards non-conventional uses of music notation. The environment allows arbitrary graphic resources to be used and composed for the music representation. It supports symbolic music notation, described using the Guido Music Notation or MusicXML formats. The environment has been extended to provide score-level composition using a set of operators that consistently take scores as arguments and compute new scores as output. The INScore API now supports score expressions both at the OSC and at the scripting levels. This work is based on previous research that solved the issues of notation consistency across score composition. This paper focuses on the language level and explains the different strategies to evaluate score expressions.},
  keywords = {inscore, interaction, music score, score composition},
  pubstate = {published},
  tppubtype = {inproceedings}
}
2014
@inproceedings{fober14c,
  title = {Augmented Interactive Scores for Music Creation},
  author = {Dominique Fober and Yann Orlarey and Stéphane Letz},
  url = {fober-keamsac-2014-final.pdf},
  year = {2014},
  date = {2014-10-09},
  booktitle = {Proceedings of Korean Electro-Acoustic Music Society's 2014 Annual Conference [KEAMSAC2014]},
  pages = {85--91},
  abstract = {This article addresses music representation issues in the context of contemporary music creation and performance. It exposes the main challenges in terms of music notation and representation with regard to new forms of music, with an emphasis on interactive music issues. It presents INScore, an environment for the design of augmented, interactive music scores that has been developed in response to current artistic evolutions. It gives an overview of the system, presenting its main features and highlighting the main technologies involved. Concrete examples of use in recent music creations, a composer's viewpoint and the modeling of an electro-acoustic piece are also given.},
  keywords = {interaction, music score},
  pubstate = {published},
  tppubtype = {inproceedings}
}
@techreport{letz14a,
  title = {Spécification de l'extension LibAudioStream},
  author = {Stéphane Letz and Yann Orlarey and Dominique Fober},
  institution = {Grame},
  url = {l32a-libaudiostream.pdf},
  year = {2014},
  date = {2014-03-24},
  abstract = {LibAudioStream is a digital audio rendering engine, available as a library, for manipulating audio resources through the concept of streams. The engine is easy to integrate into applications that need to play sound files and audio montages, and to apply effects and DSP processing in real time. An algebra for describing, composing and transforming audio streams makes it possible to build complex expressions, to schedule them at precise dates in the future, and to have their rendering executed in real time when the moment comes. The application has sample-accurate control over what is done and when, while being relieved of the real-time audio computation itself. This report gives a formal description of the expression language and of the features of LibAudioStream.},
  keywords = {music score, processus, visualisation},
  pubstate = {published},
  tppubtype = {techreport}
}
2013
@techreport{fober13c,
  title = {Caractérisation et représentation des processus musicaux},
  author = {Dominique Fober},
  institution = {Grame},
  url = {inedit-l2.1.a.pdf},
  year = {2013},
  date = {2013-01-01},
  abstract = {This report presents a preliminary study carried out within the framework of the ANR INEDIT project. The objective is to define the elements characterizing musical processes and the way to represent them, usable in particular for a representation of musical processes within the music score.},
  keywords = {music score, processus, visualisation},
  pubstate = {published},
  tppubtype = {techreport}
}
2012
@inproceedings{fober12b,
  title = {Scores Level Composition Based on the Guido Music Notation},
  author = {Dominique Fober and Yann Orlarey and Stéphane Letz},
  editor = {ICMA},
  url = {icmc12-fober.pdf},
  year = {2012},
  date = {2012-09-13},
  booktitle = {Proceedings of the International Computer Music Conference},
  pages = {383--386},
  abstract = {Based on the Guido Music Notation format, we have developed tools for music score ``composition'', i.e. operators that take scores both as target and arguments of high-level transformations, applicable for example to the time domain (e.g. cutting the head or the tail of a score) or to the structural domain (e.g. putting scores in sequence or in parallel). Providing these operations at score level is particularly convenient to express music ideas and to compose these ideas in a homogeneous representation space. However, score-level composition gives rise to a set of issues related to music notation consistency. This paper introduces the Guido Music Notation format, presents the score composition operations and the notation issues, and proposes a way to solve them.},
  keywords = {composition, music score},
  pubstate = {published},
  tppubtype = {inproceedings}
}
@techreport{fober12c,
  title = {Real-Time Score Notation from Raw MIDI Inputs},
  author = {Dominique Fober and François Pachet and Jürgen Kilian},
  institution = {Grame},
  url = {TR-120407.pdf},
  year = {2012},
  date = {2012-04-07},
  abstract = {This paper describes tools designed and experiments conducted in the context of MIROR, a European project investigating adaptive systems for early childhood music education based on the paradigm of reflexive interaction. In MIROR, music notation is used as the trace of both the user and the system activity, produced from MIDI instruments. The task of displaying such raw MIDI inputs and outputs is difficult, as no a priori information is known concerning the underlying tempo or metrical structure. We describe here a completely automatic processing chain from raw MIDI input to fully-fledged music notation. The low-level music description is first converted into a score-level description and then automatically rendered as a graphic score. The whole process operates in real time. The paper describes the various conversion steps and issues, including extensions to support score annotations. The process is validated using about 30,000 musical sequences gathered from MIROR experiments and made available for public use.},
  keywords = {MIDI, music score, real-time},
  pubstate = {published},
  tppubtype = {techreport}
}
@techreport{fober12tr,
  title = {Segments and Mapping for Scores and Signal Representations},
  author = {Dominique Fober and Frederic Bevilacqua and Roland Assous},
  institution = {GRAME},
  url = {TR-segmentation.pdf},
  year = {2012},
  date = {2012-01-01},
  abstract = {We present a general theoretical framework to describe segments and the different possible mappings that can be established between them. Each segment can be related to different music representations: graphic scores, music signals or gesture signals. This theoretical formalism is general and compatible with a large number of problems found in sound and gesture computing. We describe some examples we developed in interactive score representation, superposed with signal representation, and the description of synchronization between gesture and sound signals.},
  keywords = {gesture, music, music score, synchronization},
  pubstate = {published},
  tppubtype = {techreport}
}
2011
@article{fober11a,
  title = {INScore: An Environment for the Design of Live Music Scores},
  author = {Dominique Fober and Yann Orlarey and Stephane Letz},
  url = {inscore-nime2011.pdf},
  year = {2011},
  date = {2011-01-01},
  journal = {Audio-graphic Modeling and Interaction Workshop at NIME 2011},
  abstract = {INScore is an open source framework for the design of interactive, augmented, live music scores. An augmented score is a graphic space providing representation, composition and manipulation of heterogeneous and arbitrary music objects (music scores but also images, text, signals…), both in the graphic and time domains. INScore provides a dynamic system for the representation of the music performance, considered as a specific sound or gesture instance of the score and viewed as signals. It integrates an event-based interaction mechanism that opens the door to original uses and designs, transforming a score into a user interface or allowing a score to modify itself based on temporal events.},
  keywords = {inscore, interaction, music score},
  pubstate = {published},
  tppubtype = {article}
}