Project Deliverables
-
D1.1: Project Handbook and Quality Plan, April 2016.
Download here.
License: CC-BY 4.0.
-
D1.2: Draft Data Management Plan, July 2016.
Download here.
License: CC-BY 4.0.
-
D1.3: Second Draft Data Management Plan, July 2017.
Download here.
License: CC-BY 4.0.
-
D1.4: Data Management Plan, January 2019.
Download here.
License: CC-BY 4.0.
-
D2.1: Requirements Report and Use Cases, June 2016.
Download here.
License: CC-BY 4.0.
-
D2.2: Draft Ontology Specification, November 2016.
Download here.
License: CC-BY 4.0.
-
D2.3: Final Ontology Specification, July 2017.
Download here.
License: CC-BY 4.0.
-
D2.4: API specification, May 2017.
Download here.
License: CC-BY 4.0.
-
D2.5: Service integration technologies, May 2017.
Download here.
License: CC-BY 4.0.
-
D2.6: Service integration draft guidelines, May 2017.
Download here.
License: CC-BY 4.0.
-
D2.7: Service integration guidelines, January 2019.
Download here.
License: CC-BY 4.0.
-
D2.8: Ontology evaluation report, January 2019.
Download here.
License: CC-BY 4.0.
-
D3.1: Report on Rights Management requirements, July 2016.
Download here.
License: CC-BY 4.0.
-
D3.2: Report on usage of Creative Commons licenses, November 2016.
Download here.
License: CC-BY 4.0.
-
D3.3: Guidelines for including new actors in the ACE, May 2017.
Download here.
License: CC-BY 4.0.
-
D3.4: Report on business models emerging from the ACE, July 2017.
Download here.
License: CC-BY 4.0.
-
D3.5: Evaluation of the business models emerging from the ACE, January 2019.
Download here.
License: CC-BY 4.0.
-
D4.1: Report on the analysis and compilation of state-of-the-art methods for the automatic annotation of music pieces and music samples, July 2016.
Download here.
License: CC-BY 4.0.
-
D4.2: First prototype tool for the automatic semantic description of music samples, May 2017.
Download here.
License: CC-BY 4.0.
-
D4.3: First prototype tool for the automatic semantic description of music pieces, May 2017.
Download here.
License: CC-BY 4.0.
-
D4.4: Evaluation report on the first prototype tool for the automatic semantic description of music samples, July 2017.
Download here.
License: CC-BY 4.0.
-
D4.5: Evaluation report on the first prototype tool for the automatic semantic description of music pieces, July 2017.
Download here.
License: CC-BY 4.0.
-
D4.6: Release of tool for the manual annotation of musical content, September 2017.
Download here.
License: CC-BY 4.0.
-
D4.7: Second prototype tool for the automatic semantic description of music samples, July 2018.
Download here.
License: CC-BY 4.0.
-
D4.8: Second prototype tool for the automatic semantic description of music pieces, July 2018.
Download here.
License: CC-BY 4.0.
-
D4.9: Evaluation report on the tool for manual annotation of musical content, July 2018.
Download here.
License: CC-BY 4.0.
-
D4.10: Evaluation report on the second prototype tool for the automatic semantic description of music samples, December 2018.
Download here.
License: CC-BY 4.0.
-
D4.11: Evaluation report on the second prototype tool for the automatic semantic description of music pieces, December 2018.
Download here.
License: CC-BY 4.0.
-
D4.12: Release of the tool for the automatic semantic description of music samples, January 2019.
Download here.
License: CC-BY 4.0.
-
D4.13: Release of tool for the automatic semantic description of music pieces, January 2019.
Download here.
License: CC-BY 4.0.
-
D5.1: Hierarchical ontology of timbral semantic descriptors, August 2016.
Download here.
License: CC-BY 4.0.
-
D5.2: First prototype of timbral characterisation tool for semantically annotating non-musical content, May 2017.
Download here.
License: CC-BY 4.0.
-
D5.3: Evaluation report on the first prototypes of the timbral characterisation tools, July 2017.
Download here.
License: CC-BY 4.0.
-
D5.4: Release of tool for the manual annotation of non-musical content, April 2018.
Download here.
License: CC-BY 4.0.
-
D5.5: Evaluation report on the tool for manual annotation of non-musical content, July 2018.
Download here.
License: CC-BY 4.0.
-
D5.6: Second prototype of timbral characterisation tool for semantically annotating non-musical content, July 2018.
Download here.
License: CC-BY 4.0.
-
D5.7: Evaluation report on the second prototype of the timbral characterisation tools, December 2018.
Download here.
License: CC-BY 4.0.
-
D5.8: Release of timbral characterisation tools for semantically annotating non-musical content, January 2019.
Download here.
License: CC-BY 4.0.
-
D6.8: Report on novel methods for measuring creativity support, July 2018.
Download here.
License: CC-BY 4.0.
-
D6.12: Report on the evaluation of the ACE from a holistic and technological perspective, January 2019.
Download here.
License: CC-BY 4.0.
-
D7.1: Project Website, April 2016.
Download here.
License: CC-BY 4.0.
-
D7.2: Visual Identity of Audio Commons, April 2016.
Download here.
License: CC-BY 4.0.
-
D7.3: Dissemination Plan, January 2017.
Download here.
License: CC-BY 4.0.
-
D7.7: Report on dissemination and publication of results, January 2019.
Download here.
License: CC-BY 4.0.
Published Papers
2019 (3)
-
Cano, E., FitzGerald, D., Liutkus, A., Plumbley, M. D., Stöter, F.-R.
(2019).
Musical Source Separation: An Introduction, published in "IEEE Signal Processing Magazine".
Full details and access here.
-
Ferraro, A., Bogdanov, D., Serra, X.
(2019).
Skip prediction using boosting trees based on acoustic features of tracks in sessions, published in "Proc. of the 12th ACM International Conference on Web Search and Data Mining, 2019 WSDM Cup Workshop".
Full details and access here.
-
Pearce, A., Brookes, T., Mason, R.
(2019).
Modelling Timbral Hardness, published in "Applied Sciences".
Full details and access here.
2018 (34)
-
Bogdanov, D., Porter, A., Urbano, J., Schreiber, H.
(2018).
The MediaEval 2018 AcousticBrainz Genre Task: Content-based Music Genre Recognition from Multiple Sources, published in "MediaEval Workshop".
Full details and access here.
-
Ceriani, M., Fazekas, G.
(2018).
Audio Commons Ontology: A Data Model for an Audio Content Ecosystem, published in "Proc. of the 17th International Semantic Web Conference (ISWC)".
Full details and access here.
-
Choi, K., Fazekas, G., Sandler, M., Cho, K.
(2018).
The Effects of Noisy Labels on Deep Convolutional Neural Networks for Music Tagging, published in "IEEE Transactions on Emerging Topics in Computational Intelligence, Vol. 2, No. 2".
Full details and access here.
-
Choi, K., Fazekas, G., Sandler, M., Cho, K.
(2018).
A Comparison of Audio Signal Preprocessing Methods for Deep Neural Networks on Music Tagging, published in "Proc. of the 26th European Signal Processing Conference (EUSIPCO)".
Full details and access here.
-
Choobbasti, A., Gholamian, M., Vaheb, A., and Safavi, S.
(2018).
JSPEECH: A Multi-lingual conversational speech corpus, published in "Proc. of the Speech and Language Technology Workshop (SLT)".
Full details and access here.
-
Favory, X., Fonseca, E., Font, F., Serra, X.
(2018).
Facilitating the Manual Annotation of Sounds When Using Large Taxonomies, published in "Proc. of the International Workshop on Semantic Audio and the Internet of Things (ISAI), in IEEE FRUCT Conference".
Full details and access here.
-
Favory, X., Serra, X.
(2018).
Multi Web Audio Sequencer: Collaborative Music Making, published in "Proc. of the Web Audio Conference (WAC)".
Full details and access here.
-
Ferraro, A., Bogdanov, D., Choi, K., Serra, X.
(2018).
Using offline metrics and user behavior analysis to combine multiple systems for music recommendation, published in "Proc. of the Conference on Recommender Systems (RecSys), REVEAL Workshop".
Full details and access here.
-
Ferraro, A., Bogdanov, D., Yoon, J., Kim, K. S., Serra, X.
(2018).
Automatic playlist continuation using a hybrid recommender system combining features from text and audio, published in "Proc. of the Conference on Recommender Systems (RecSys), Workshop on the RecSys Challenge".
Full details and access here.
-
Fonseca, E., Gong, R., Serra, X.
(2018).
A Simple Fusion of Deep and Shallow Learning for Acoustic Scene Classification, published in "Proc. of the Sound and Music Computing Conference".
Full details and access here.
-
Fonseca, E., Plakal, M., Font, F., Ellis, D. P. W., Favory, X., Pons, J., Serra, X.
(2018).
General-purpose Tagging of Freesound Audio with AudioSet Labels: Task Description, Dataset, and Baseline, published in "Proc. of the Detection and Classification of Acoustic Scenes and Events Workshop (DCASE)".
Full details and access here.
-
Liang, B., Fazekas, G., Sandler, M.
(2018).
Measurement, Recognition and Visualisation of Piano Pedalling Gestures and Techniques, published in "Journal of the AES, Vol. 66, Issue 2".
Full details and access here.
-
Milo, A., Barthet, M., Fazekas, G.
(2018).
The Audio Commons Initiative, published in "Proc. of the Digital Music Research Network (DMRN)".
Full details and access here.
-
Oramas, S., Bogdanov, D., Porter, A.
(2018).
MediaEval 2018 AcousticBrainz Genre Task: A baseline combining deep feature embeddings across datasets, published in "MediaEval Workshop".
Full details and access here.
-
Pauwels, J., Xambó, A., Roma, G., Barthet, M., Fazekas, G.
(2018).
Exploring Real-time Visualisations to Support Chord Learning with a Large Music Collection, published in "Proc. of the Web Audio Conference (WAC)".
Full details and access here.
-
Safavi, S., Pearce, A., Wang, W., Plumbley, M.
(2018).
Predicting the perceived level of reverberation using machine learning, published in "Proc. of the Asilomar Conference on Signals, Systems, & Computers".
Full details and access here.
-
Safavi, S., Wang, W., Plumbley, M., Choobbasti, A., and Fazekas, G.
(2018).
Predicting the Perceived Level of Reverberation using Features from Nonlinear Auditory Model, published in "Proc. of the International Workshop on Semantic Audio and the Internet of Things (ISAI), in IEEE FRUCT Conference".
Full details and access here.
-
Sheng, D., Fazekas, G.
(2018).
Feature Design Using Audio Decomposition for Intelligent Control of the Dynamic Range Compressor, published in "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)".
Full details and access here.
-
Skach, S., Xambó, A., Turchet, L., Stolfi, A., Stewart, R., Barthet, M.
(2018).
Embodied Interactions with E-Textiles and the Internet of Sounds for Performing Arts, published in "Proc. of the 12th International Conference on Tangible, Embedded, and Embodied Interaction".
Full details and access here.
-
Stolfi, A., Milo, A., Ceriani, M., Barthet, M.
(2018).
Participatory musical improvisations with Playsound.space, published in "Proc. of the Web Audio Conference (WAC)".
Full details and access here.
-
Stolfi, A., Milo, A., Viola, F., Ceriani, M., Barthet, M.
(2018).
Playsound.space: Inclusive Free Music Improvisations Using Audio Commons, published in "Proc. of the New Interfaces for Musical Expression (NIME)".
Full details and access here.
-
Stolfi, A., Sokolovskis, J., Gorodscy, F., Iazzetta, F., Barthet, M.
(2018).
Audio Semantics: Online Chat Communication in Open Band Participatory Music Performances, published in "Journal of the Audio Engineering Society".
Full details and access here.
-
Turchet, L., Barthet, M.
(2018).
Demo of interactions between a performer playing a Smart Mandolin and audience members using Musical Haptic Wearables, published in "Proc. of the New Interfaces for Musical Expression (NIME)".
Full details and access here.
-
Turchet, L., Barthet, M.
(2018).
Towards a Semantic Architecture for the Internet of Musical Things, published in "Proc. of the 23rd FRUCT Conference".
Full details and access here.
-
Turchet, L., Barthet, M.
(2018).
Jamming with a smart mandolin and Freesound, published in "Proc. of the 23rd FRUCT Conference".
Full details and access here.
-
Turchet, L., Barthet, M.
(2018).
Ubiquitous Musical Activities with Smart Musical Instruments, published in "Proc. of the Workshop on Ubiquitous Music (UBIMUS)".
Full details and access here.
-
Turchet, L., Barthet, M.
(2018).
Co-design of Musical Haptic Wearables for Electronic Music Performer’s Communication, published in "IEEE Transactions on Human-Machine Systems".
Full details and access here.
-
Turchet, L., Barthet, M.
(2018).
Internet of Musical Things: Vision and Challenges, published in "IEEE Access".
Full details and access here.
-
Turchet, L., McPherson, A., Barthet, M.
(2018).
Real-time hit classification in a Smart Cajón, published in "Frontiers in ICT".
Full details and access here.
-
Turchet, L., McPherson, A., Barthet, M.
(2018).
Co-design of a Smart Cajón, published in "Journal of the Audio Engineering Society".
Full details and access here.
-
Vaheb, A., Choobbasti, A., Mortazavi, S., and Safavi, S.
(2018).
Investigating Language Variability on the Performance of Speaker Verification Systems, published in "Proc. of the 21st International Conference on Speech and Computer (SPECOM)".
Full details and access here.
-
Viola, F., Stolfi, A., Milo, A., Ceriani, M., Barthet, M.
(2018).
Playsound.space: enhancing a live performance tool with semantic recommendations, published in "Proc. of the Workshop on Semantic Applications for Audio and Music (SAAM)".
Full details and access here.
-
Xambó, A., Pauwels, J., Roma, G., Barthet, M., Fazekas, G.
(2018).
Jam with Jamendo: Querying a Large Music Collection by Chords from a Learner’s Perspective, published in "Proc. of the 13th International Audio Mostly Conference".
Full details and access here.
-
Xambó, A., Roma, G., Lerch, A., Barthet, M., Fazekas, G.
(2018).
Live Repurposing of Sounds: MIR Explorations with Personal and Crowdsourced Databases, published in "Proc. of the New Interfaces for Musical Expression (NIME)".
Full details and access here.
2017 (15)
-
Bogdanov, D., Serra, X.
(2017).
Quantifying music trends and facts using editorial metadata from the Discogs database, published in "Proc. of the International Society for Music Information Retrieval Conference (ISMIR)".
Full details and access here.
-
Bogdanov, D., Porter, A., Urbano, J., Schreiber, H.
(2017).
The MediaEval 2017 AcousticBrainz Genre Task: Content-based Music Genre Recognition from Multiple Sources, published in "MediaEval Workshop".
Full details and access here.
-
Choi, K., Fazekas, G., Sandler, M., Cho, K.
(2017).
Convolutional Recurrent Neural Networks for Music Classification, published in "Proc. of the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)".
Full details and access here.
-
Choi, K., Fazekas, G., Sandler, M., Cho, K.
(2017).
Transfer Learning for Music Classification and Regression Tasks, published in "Proc. of the International Society for Music Information Retrieval Conference (ISMIR)".
Full details and access here.
-
Fonseca, E., Gong, R., Bogdanov, D., Slizovskaia, O., Gomez, E., Serra, X.
(2017).
Acoustic Scene Classification by Ensembling Gradient Boosting Machine and Convolutional Neural Networks, published in "Proc. of the Detection and Classification of Acoustic Scenes and Events Workshop (DCASE)".
Full details and access here.
-
Fonseca, E., Pons, J., Favory, X., Font, F., Bogdanov, D., Ferraro, A., Oramas, S., Porter, A., Serra, X.
(2017).
Freesound Datasets: A Platform for the Creation of Open Audio Datasets, published in "Proc. of the International Society for Music Information Retrieval Conference (ISMIR)".
Full details and access here.
-
Font, F., Bandiera, G.
(2017).
Freesound Explorer: Make Music While Discovering Freesound!, published in "Proc. of the Web Audio Conference (WAC)".
Full details and access here.
-
Herremans, D., Yang, S., Chuan, C. H., Barthet, M., & Chew, E.
(2017).
IMMA-Emo: A Multimodal Interface for Visualising Score-and Audio-synchronised Emotion Annotations, published in "Proc. of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences".
Full details and access here.
-
Liang, B., Fazekas, G., Sandler, M.
(2017).
Recognition of Piano Pedalling Techniques Using Gesture Data, published in "Proc. of the ACM 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences".
Full details and access here.
-
Page, K., Bechhofer, S., Fazekas, G., Weigl, D., Wilmering, T.
(2017).
Realising a Layered Digital Library: Exploration and Analysis of the Live Music Archive through Linked Data, published in "Proc. of the ACM/IEEE Joint Conference on Digital Libraries (JCDL)".
Full details and access here.
-
Pauwels, J., O'Hanlon, K., Fazekas, G., Sandler, M.
(2017).
Exploring Confidence Measures and Their Application in Music Labelling Systems Based on Hidden Markov Models, published in "Proc. of the International Society for Music Information Retrieval Conference (ISMIR)".
Full details and access here.
-
Pearce, A., Brookes, T., Mason, R.
(2017).
Timbral attributes for sound effect library searching, published in "Proc. of the Audio Engineering Society Conference on Semantic Audio".
Full details and access here.
-
Stolfi, A., Barthet, M., Goródscy, F., de Carvalho Junior, A. D.
(2017).
Open Band: A Platform for Collective Sound Dialogues, published in "Proc. of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences".
Full details and access here.
-
Subramaniam, A., Barthet, M.
(2017).
Mood Visualiser: Augmented Music Visualisation Gauging Audience Arousal, published in "Proc. of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences".
Full details and access here.
-
Wilmering, T., Thalmann, F., Fazekas, G., Sandler, M.
(2017).
Bridging Fan Communities and Facilitating Access to Music Archives Through Semantic Audio Applications, published in "Proc. of the 143rd Convention of the Audio Engineering Society".
Full details and access here.
2016 (11)
-
Allik, A., Fazekas, G., Sandler, M.
(2016).
An Ontology for Audio Features, published in "Proc. of the International Society for Music Information Retrieval Conference (ISMIR)".
Full details and access here.
-
Allik, A., Fazekas, G., Sandler, M.
(2016).
Ontological Representation of Audio Features, published in "Proc. of the 15th International Semantic Web Conference (ISWC)".
Full details and access here.
-
Bogdanov, D., Porter, A., Herrera, P., Serra, X.
(2016).
Cross-collection evaluation for music classification tasks, published in "Proc. of the International Society for Music Information Retrieval Conference (ISMIR)".
Full details and access here.
-
Buccoli, M., Zanoni, M., Fazekas, G., Sarti A., Sandler, M.
(2016).
A Higher-Dimensional Expansion of Affective Norms for English Terms for Music Tagging, published in "Proc. of the International Society for Music Information Retrieval Conference (ISMIR)".
Full details and access here.
-
Choi, K., Fazekas, G., Sandler, M.
(2016).
Automatic Tagging Using Deep Convolutional Neural Networks, published in "Proc. of the International Society for Music Information Retrieval Conference (ISMIR)".
Full details and access here.
-
Choi, K., Fazekas, G., Sandler, M.
(2016).
Towards Playlist Generation Algorithms Using RNNs Trained on Within-Track Transitions, published in "Proc. of the User Modeling, Adaptation and Personalization Conference (UMAP), Workshop on Surprise, Opposition, and Obstruction in Adaptive and Personalized Systems (SOAP)".
Full details and access here.
-
Font, F., Brookes, T., Fazekas, G., Guerber, M., La Burthe, A., Plans, A., Plumbley, M. D., Shaashua, M., Wang, W., Serra, X.
(2016).
Audio Commons: bringing Creative Commons audio content to the creative industries, published in "Proc. of the 61st AES Conference on Audio for Games".
Full details and access here.
-
Font, F., Serra, X.
(2016).
Tempo Estimation for Music Loops and a Simple Confidence Measure, published in "Proc. of the International Society for Music Information Retrieval Conference (ISMIR)".
Full details and access here.
-
Juric, D., Fazekas, G.
(2016).
Knowledge Extraction from Audio Content Service Providers’ API Descriptions, published in "Proc. of the 10th International Conference on Metadata and Semantics Research (MTSR)".
Full details and access here.
-
Porter, A., Bogdanov, D., Serra, X.
(2016).
Mining metadata from the web for AcousticBrainz, published in "Proc. of the 3rd International Digital Libraries for Musicology Workshop".
Full details and access here.
-
Wilmering, T., Fazekas, G., Sandler, M.
(2016).
AUFX-O: Novel Methods for the Representation of Audio Processing Workflows, published in "Proc. of the 15th International Semantic Web Conference (ISWC)".
Full details and access here.
2015 (1)
-
Font, F., Serra, X.
(2015).
The Audio Commons Initiative, published in "Proc. of the International Society for Music Information Retrieval Conference (ISMIR, late-breaking demo)".
Full details and access here.
Other Materials
-
Audio Commons generic presentation slides, February 2016.
Download here.
License: CC0.
-
Audio Commons Description of Action (initial proposal), February 2016.
Download here.
License: CC-BY.
-
Audio Commons web site source code repository, February 2016.
Check out the code repository here.
License: GNU General Public License 3.0.
-
Logo and visual identity code repository, February 2016.
Check out the code repository here.
You'll find exports of the logo in different formats as well as vector source files, fonts and guidelines.
License: CC0.
-
Audio Commons logos, February 2016.
Download here.
Audio Commons logo and icon in horizontal and vertical layouts and in PNG and SVG formats.
License: CC0.