<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>OMR-Research</title>
<link rel="stylesheet" type="text/css" href="css/OMR-Research.css">
</head>
<body>
<section class="page-header">
<h1>Bibliography on Optical Music Recognition</h1>
<p>Last updated: 01.12.2024</p>
<a href="https://github.com/OMR-Research/omr-research.github.io" class="btn">View on GitHub</a>
<table class="page-header-table" id="navigation-table">
<tr>
<td><a href="index.html" class="btn-light">Sorted by Year</a></td>
<td><a href="omr-research-compact.html" class="btn-light">Sortey by Year (Compact)</a></td>
<td><a href="omr-research-sorted-by-key.html" class="btn-light">Sorted by Key</a> </td>
<td><a href="omr-related-research.html" class="btn-light">Related research</a></td>
<td><a href="omr-research-unverified.html" class="btn-light">Unverified research</a></td>
</tr>
</table>
</section>
<!-- This document was automatically generated with bibtex2html 1.96
(see http://www.lri.fr/~filliatr/bibtex2html/),
with the following command:
BibTeX2HTML/OSX_x86_64/bibtex2html -s omr-style --use-keys --no-keywords --nodoc -d -r -o OMR-Research-Year OMR-Research.bib -->
<table>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="CalvoZaragoza2024">CalvoZaragoza2024</a>]
</td>
<td class="bibtexitem">
Jorge Calvo-Zaragoza, Eliseo Fuentes-Martínez, Noelia Luna-Barahona, and
Antonio Ríos-Vila.
Can multimodal large language models read music score images?
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 6th International Workshop on Reading Music
Systems</em>, pages 4-6, Online, 2024.
[ <a href="OMR-Research-Year_bib.html#CalvoZaragoza2024">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2411.15741">DOI</a> |
<a href="https://sites.google.com/view/worms2024/proceedings">http</a> ]
<blockquote><font size="-1">
This paper investigates whether multimodal large language models (MLLMs), which combine visual and textual understanding, can effectively read and interpret music score images. Given their ability to process and integrate information from multiple modalities, MLLMs present a promising approach for Optical Music Recognition (OMR). Through empirical evaluation, we demonstrate that while MLLMs exhibit potential in recognizing musical structures, challenges remain in addressing the complexity of music notation. This work highlights the need for further refinements in ML
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Coueasnon2024">Coueasnon2024</a>]
</td>
<td class="bibtexitem">
Bertrand Coüasnon, Mathieu Giraud, Christophe Guillotel Nothmann,
Aurélie Lemaitre, and Philippe Rigaux.
CollabScore project - From Optical Recognition to Multimodal Music
Sources.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 6th International Workshop on Reading Music
Systems</em>, pages 33-37, Online, 2024.
[ <a href="OMR-Research-Year_bib.html#Coueasnon2024">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2411.15741">DOI</a> |
<a href="https://sites.google.com/view/worms2024/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Dvorak2024">Dvorak2024</a>]
</td>
<td class="bibtexitem">
Vojtěch Dvořák, Jan jr. Hajič, and Jiří Mayer.
Staff Layout Analysis Using the YOLO Platform.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 6th International Workshop on Reading Music
Systems</em>, pages 18-22, Online, 2024.
[ <a href="OMR-Research-Year_bib.html#Dvorak2024">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2411.15741">DOI</a> |
<a href="https://sites.google.com/view/worms2024/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Hartelt2024">Hartelt2024</a>]
</td>
<td class="bibtexitem">
Alexander Hartelt and Frank Puppe.
OMMR4all revisited - a Semiautomatic Online Editor for Medieval
Music Notations.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 6th International Workshop on Reading Music
Systems</em>, pages 46-49, Online, 2024.
[ <a href="OMR-Research-Year_bib.html#Hartelt2024">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2411.15741">DOI</a> |
<a href="https://sites.google.com/view/worms2024/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Lambertye2024">Lambertye2024</a>]
</td>
<td class="bibtexitem">
Grégoire de Lambertye and Alexander Pacha.
Semantic Reconstruction of Sheet Music with Graph-Neural Networks.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 6th International Workshop on Reading Music
Systems</em>, pages 12-17, Online, 2024.
[ <a href="OMR-Research-Year_bib.html#Lambertye2024">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2411.15741">DOI</a> |
<a href="https://sites.google.com/view/worms2024/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="MenarguezBox2024">MenarguezBox2024</a>]
</td>
<td class="bibtexitem">
Aitana Menárguez-Box, Alejandro H. Tosselli, and Enrique Vidal.
Enhanced User-Machine Interaction for Historical Sheet Music
Retrieval: a Musical Notation Approach.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 6th International Workshop on Reading Music
Systems</em>, pages 28-32, Online, 2024.
[ <a href="OMR-Research-Year_bib.html#MenarguezBox2024">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2411.15741">DOI</a> |
<a href="https://sites.google.com/view/worms2024/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Repolusk2024">Repolusk2024</a>]
</td>
<td class="bibtexitem">
Tristan Repolusk and Eduardo Veas.
Semi-Automatic Annotation of Chinese Suzipu Notation Using a
Component-Based Prediction and Similarity Approach.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 6th International Workshop on Reading Music
Systems</em>, pages 38-42, Online, 2024.
[ <a href="OMR-Research-Year_bib.html#Repolusk2024">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2411.15741">DOI</a> |
<a href="https://sites.google.com/view/worms2024/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="RiosVila2024">RiosVila2024</a>]
</td>
<td class="bibtexitem">
Antonio Ríos-Vila, Eliseo Fuentes-Martinez, and Jorge Calvo-Zaragoza.
Towards Sheet Music Information Retrieval: A Unified Approach Using
Multitask Transformers.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 6th International Workshop on Reading Music
Systems</em>, pages 7-11, Online, 2024.
[ <a href="OMR-Research-Year_bib.html#RiosVila2024">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2411.15741">DOI</a> |
<a href="https://sites.google.com/view/worms2024/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Tirupati2024">Tirupati2024</a>]
</td>
<td class="bibtexitem">
Nivesara Tirupati, Elona Shatri, and György Fazekas.
Crafting Handwritten Notations: Towards Sheet Music Generation.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 6th International Workshop on Reading Music
Systems</em>, pages 50-56, Online, 2024.
[ <a href="OMR-Research-Year_bib.html#Tirupati2024">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2411.15741">DOI</a> |
<a href="https://sites.google.com/view/worms2024/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Torras2024">Torras2024</a>]
</td>
<td class="bibtexitem">
Pau Torras, Sanket Biswas, and Alicia Fornés.
On Designing a Representation for the Evaluation of Optical Music
Recognition Systems.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 6th International Workshop on Reading Music
Systems</em>, pages 23-27, Online, 2024.
[ <a href="OMR-Research-Year_bib.html#Torras2024">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2411.15741">DOI</a> |
<a href="https://sites.google.com/view/worms2024/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Umbreit2024">Umbreit2024</a>]
</td>
<td class="bibtexitem">
Janosch Umbreit and Silvana Schumann.
OMR on Early Music Sources at the Bavarian State Library with MuRET
- Prototyping, Automating, Scaling.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 6th International Workshop on Reading Music
Systems</em>, pages 43-45, Online, 2024.
[ <a href="OMR-Research-Year_bib.html#Umbreit2024">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2411.15741">DOI</a> |
<a href="https://sites.google.com/view/worms2024/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="AlfaroContreras2023">AlfaroContreras2023</a>]
</td>
<td class="bibtexitem">
María Alfaro-Contreras.
Few-Shot Music Symbol Classification via Self-Supervised Learning and
Nearest Neighbor.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 5th International Workshop on Reading Music
Systems</em>, pages 39-43, Milan, Italy, 2023.
[ <a href="OMR-Research-Year_bib.html#AlfaroContreras2023">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2311.04091">DOI</a> |
<a href="https://sites.google.com/view/worms2023/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Castellanos2023">Castellanos2023</a>]
</td>
<td class="bibtexitem">
Francisco J. Castellanos, Antonio Javier Gallego, and Ichiro Fujinaga.
A Preliminary Study of Few-shot Learning for Layout Analysis of Music
Scores.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 5th International Workshop on Reading Music
Systems</em>, pages 44-48, Milan, Italy, 2023.
[ <a href="OMR-Research-Year_bib.html#Castellanos2023">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2311.04091">DOI</a> |
<a href="https://sites.google.com/view/worms2023/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Fujinaga2023">Fujinaga2023</a>]
</td>
<td class="bibtexitem">
Ichiro Fujinaga and Gabriel Vigliensoni.
Optical Music Recognition Workflow for Medieval Music Manuscripts.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 5th International Workshop on Reading Music
Systems</em>, pages 4-6, Milan, Italy, 2023.
[ <a href="OMR-Research-Year_bib.html#Fujinaga2023">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2311.04091">DOI</a> |
<a href="https://sites.google.com/view/worms2023/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Hajic2023">Hajic2023</a>]
</td>
<td class="bibtexitem">
Jan jr. Hajič, Petr Žabička, Jan Rychtář, Jiří
Mayer, Martina Dvořáková, Filip Jebavý, Markéta
Vlková, and Pavel Pecina.
The OmniOMR Project.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 5th International Workshop on Reading Music
Systems</em>, pages 12-14, Milan, Italy, 2023.
[ <a href="OMR-Research-Year_bib.html#Hajic2023">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2311.04091">DOI</a> |
<a href="https://sites.google.com/view/worms2023/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Hande2023">Hande2023</a>]
</td>
<td class="bibtexitem">
Pranjali Hande, Elona Shatri, Benjamin Timms, and György Fazekas.
Towards Artificially Generated Handwritten Sheet Music Datasets.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 5th International Workshop on Reading Music
Systems</em>, pages 25-30, Milan, Italy, 2023.
[ <a href="OMR-Research-Year_bib.html#Hande2023">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2311.04091">DOI</a> |
<a href="https://sites.google.com/view/worms2023/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Havelka2023">Havelka2023</a>]
</td>
<td class="bibtexitem">
Jonáš Havelka, Jiří Mayer, and Pavel Pecina.
Symbol Generation via Autoencoders for Handwritten Music Synthesis.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 5th International Workshop on Reading Music
Systems</em>, pages 20-24, Milan, Italy, 2023.
[ <a href="OMR-Research-Year_bib.html#Havelka2023">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2311.04091">DOI</a> |
<a href="https://sites.google.com/view/worms2023/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="MartinezSevilla2023">MartinezSevilla2023</a>]
</td>
<td class="bibtexitem">
Juan Carlos Martinez-Sevilla and Francisco J. Castellanos.
Towards Music Notation and Lyrics Alignment: Gregorian Chants as Case
Study.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 5th International Workshop on Reading Music
Systems</em>, pages 15-19, Milan, Italy, 2023.
[ <a href="OMR-Research-Year_bib.html#MartinezSevilla2023">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2311.04091">DOI</a> |
<a href="https://sites.google.com/view/worms2023/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Repolusk2023">Repolusk2023</a>]
</td>
<td class="bibtexitem">
Tristan Repolusk and Eduardo Veas.
The Suzipu Musical Annotation Tool for the Creation of
Machine-Readable Datasets of Ancient Chinese Music.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 5th International Workshop on Reading Music
Systems</em>, pages 7-11, Milan, Italy, 2023.
[ <a href="OMR-Research-Year_bib.html#Repolusk2023">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2311.04091">DOI</a> |
<a href="https://sites.google.com/view/worms2023/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="RiosVila2023">RiosVila2023</a>]
</td>
<td class="bibtexitem">
Antonio Ríos-Vila.
Rotations Are All You Need: A Generic Method For End-To-End Optical
Music Recognition.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 5th International Workshop on Reading Music
Systems</em>, pages 34-38, Milan, Italy, 2023.
[ <a href="OMR-Research-Year_bib.html#RiosVila2023">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2311.04091">DOI</a> |
<a href="https://sites.google.com/view/worms2023/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Zhang2023">Zhang2023</a>]
</td>
<td class="bibtexitem">
Zihui Zhang, Elona Shatri, and György Fazekas.
Improving Sheet Music Recognition using Data Augmentation and Image
Enhancement.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 5th International Workshop on Reading Music
Systems</em>, pages 31-33, Milan, Italy, 2023.
[ <a href="OMR-Research-Year_bib.html#Zhang2023">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2311.04091">DOI</a> |
<a href="https://sites.google.com/view/worms2023/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Egozy2022">Egozy2022</a>]
</td>
<td class="bibtexitem">
Eran Egozy and Ian Clester.
Computer-Assisted Measure Detection in a Music Score-Following
Application.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 4th International Workshop on Reading Music
Systems</em>, pages 33-36, Online, 2022.
[ <a href="OMR-Research-Year_bib.html#Egozy2022">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2211.13285">DOI</a> |
<a href="https://sites.google.com/view/worms2022/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="GarridoMunoz2022">GarridoMunoz2022</a>]
</td>
<td class="bibtexitem">
Carlos Garrido-Munoz, Antonio Ríos-Vila, and Jorge Calvo-Zaragoza.
End-to-End Graph Prediction for Optical Music Recognition.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 4th International Workshop on Reading Music
Systems</em>, pages 25-28, Online, 2022.
[ <a href="OMR-Research-Year_bib.html#GarridoMunoz2022">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2211.13285">DOI</a> |
<a href="https://sites.google.com/view/worms2022/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Jacquemard2022">Jacquemard2022</a>]
</td>
<td class="bibtexitem">
Florent Jacquemard, Lydia Rodriguez-de la Nava, and Martin Digard.
Automated Transcription of Electronic Drumkits.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 4th International Workshop on Reading Music
Systems</em>, pages 37-41, Online, 2022.
[ <a href="OMR-Research-Year_bib.html#Jacquemard2022">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2211.13285">DOI</a> |
<a href="https://sites.google.com/view/worms2022/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Mayer2022">Mayer2022</a>]
</td>
<td class="bibtexitem">
Jiří Mayer and Pavel Pecina.
Obstacles with Synthesizing Training Data for OMR.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 4th International Workshop on Reading Music
Systems</em>, pages 15-19, Online, 2022.
[ <a href="OMR-Research-Year_bib.html#Mayer2022">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2211.13285">DOI</a> |
<a href="https://sites.google.com/view/worms2022/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Moss2022">Moss2022</a>]
</td>
<td class="bibtexitem">
Fabian C. Moss, Néstor Nápoles López, Maik Köster, and
David Rizo.
Challenging sources: a new dataset for OMR of diverse 19th-century
music theory examples.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 4th International Workshop on Reading Music
Systems</em>, pages 4-8, Online, 2022.
[ <a href="OMR-Research-Year_bib.html#Moss2022">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2211.13285">DOI</a> |
<a href="https://sites.google.com/view/worms2022/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Penarrubia2022">Penarrubia2022</a>]
</td>
<td class="bibtexitem">
Carlos Penarrubia, Carlos Garrido-Muñoz, Jose J. Valero-Mas, and Jorge
Calvo-Zaragoza.
Efficient Approaches for Notation Assembly in Optical Music
Recognition.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 4th International Workshop on Reading Music
Systems</em>, pages 29-32, Online, 2022.
[ <a href="OMR-Research-Year_bib.html#Penarrubia2022">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2211.13285">DOI</a> |
<a href="https://sites.google.com/view/worms2022/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="RiosVila2022">RiosVila2022</a>]
</td>
<td class="bibtexitem">
Antonio Ríos-Vila, Jose M. Iñesta, and Jorge Calvo-Zaragoza.
End-To-End Full-Page Optical Music Recognition of Monophonic
Documents via Score Unfolding.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 4th International Workshop on Reading Music
Systems</em>, pages 20-24, Online, 2022.
[ <a href="OMR-Research-Year_bib.html#RiosVila2022">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2211.13285">DOI</a> |
<a href="https://sites.google.com/view/worms2022/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Torras2022">Torras2022</a>]
</td>
<td class="bibtexitem">
Pau Torras, Arnau Baró, Lei Kang, and Alicia Fornés.
Improving Handwritten Music Recognition through Language Model
Integration.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 4th International Workshop on Reading Music
Systems</em>, Online, 2022.
[ <a href="OMR-Research-Year_bib.html#Torras2022">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2211.13285">DOI</a> |
<a href="https://sites.google.com/view/worms2022/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Walwadkar2022">Walwadkar2022</a>]
</td>
<td class="bibtexitem">
Dnyanesh Walwadkar, Elona Shatri, Benjamin Timms, and György Fazekas.
CompIdNet: Sheet Music Composer Identification using Deep Neural
Network.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 4th International Workshop on Reading Music
Systems</em>, pages 9-14, Online, 2022.
[ <a href="OMR-Research-Year_bib.html#Walwadkar2022">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2211.13285">DOI</a> |
<a href="https://sites.google.com/view/worms2022/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="AlfaroContreras2021">AlfaroContreras2021</a>]
</td>
<td class="bibtexitem">
María Alfaro-Contreras, Jose J. Valero-Mas, and José Manuel
Iñesta.
Neural architectures for exploiting the components of Agnostic
Notation in Optical Music Recognition.
In Jorge Calvo-Zaragoza and Alexander Pacha, editors,
<em>Proceedings of the 3rd International Workshop on Reading Music
Systems</em>, pages 33-37, Alicante, Spain, 2021.
[ <a href="OMR-Research-Year_bib.html#AlfaroContreras2021">bib</a> |
<a href="https://sites.google.com/view/worms2021/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Baro2021">Baro2021</a>]
</td>
<td class="bibtexitem">
Arnau Baró, Carles Badal, Pau Torras, and Alicia Fornés.
Handwritten Historical Music Recognition through Sequence-to-Sequence
with Attention Mechanism.
In Jorge Calvo-Zaragoza and Alexander Pacha, editors,
<em>Proceedings of the 3rd International Workshop on Reading Music
Systems</em>, pages 55-59, Alicante, Spain, 2021.
[ <a href="OMR-Research-Year_bib.html#Baro2021">bib</a> |
<a href="https://sites.google.com/view/worms2021/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Castellanos2021">Castellanos2021</a>]
</td>
<td class="bibtexitem">
Francisco J. Castellanos and Antonio-Javier Gallego.
Unsupervised Neural Document Analysis for Music Score Images.
In Jorge Calvo-Zaragoza and Alexander Pacha, editors,
<em>Proceedings of the 3rd International Workshop on Reading Music
Systems</em>, pages 50-54, Alicante, Spain, 2021.
[ <a href="OMR-Research-Year_bib.html#Castellanos2021">bib</a> |
<a href="https://sites.google.com/view/worms2021/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Fuente2021">Fuente2021</a>]
</td>
<td class="bibtexitem">
Carlos de la Fuente, Jose J. Valero-Mas, Francisco J. Castellanos, and Jorge
Calvo-Zaragoza.
Multimodal Audio and Image Music Transcription.
In Jorge Calvo-Zaragoza and Alexander Pacha, editors,
<em>Proceedings of the 3rd International Workshop on Reading Music
Systems</em>, pages 18-22, Alicante, Spain, 2021.
[ <a href="OMR-Research-Year_bib.html#Fuente2021">bib</a> |
<a href="https://sites.google.com/view/worms2021/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Kletz2021">Kletz2021</a>]
</td>
<td class="bibtexitem">
Marc Kletz and Alexander Pacha.
Detecting Staves and Measures in Music Scores with Deep Learning.
In Jorge Calvo-Zaragoza and Alexander Pacha, editors,
<em>Proceedings of the 3rd International Workshop on Reading Music
Systems</em>, pages 8-12, Alicante, Spain, 2021.
[ <a href="OMR-Research-Year_bib.html#Kletz2021">bib</a> |
<a href="https://sites.google.com/view/worms2021/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="MasCandela2021">MasCandela2021</a>]
</td>
<td class="bibtexitem">
Enrique Mas-Candela and María Alfaro-Contreras.
Sequential Next-Symbol Prediction for Optical Music Recognition.
In Jorge Calvo-Zaragoza and Alexander Pacha, editors,
<em>Proceedings of the 3rd International Workshop on Reading Music
Systems</em>, pages 13-17, Alicante, Spain, 2021.
[ <a href="OMR-Research-Year_bib.html#MasCandela2021">bib</a> |
<a href="https://sites.google.com/view/worms2021/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Pacha2021">Pacha2021</a>]
</td>
<td class="bibtexitem">
Alexander Pacha.
The Challenge of Reconstructing Digits in Music Scores.
In Jorge Calvo-Zaragoza and Alexander Pacha, editors,
<em>Proceedings of the 3rd International Workshop on Reading Music
Systems</em>, pages 4-7, Alicante, Spain, 2021.
[ <a href="OMR-Research-Year_bib.html#Pacha2021">bib</a> |
<a href="https://sites.google.com/view/worms2021/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="RiosVila2021">RiosVila2021</a>]
</td>
<td class="bibtexitem">
Antonio Ríos-Vila, David Rizo, Jorge Calvo-Zaragoza, and José Manuel
Iñesta.
Completing Optical Music Recognition with Agnostic Transcription and
Machine Translation.
In Jorge Calvo-Zaragoza and Alexander Pacha, editors,
<em>Proceedings of the 3rd International Workshop on Reading Music
Systems</em>, pages 28-32, Alicante, Spain, 2021.
[ <a href="OMR-Research-Year_bib.html#RiosVila2021">bib</a> |
<a href="https://sites.google.com/view/worms2021/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Samiotis2021">Samiotis2021</a>]
</td>
<td class="bibtexitem">
Ioannis Petros Samiotis, Christoph Lofi, and Alessandro Bozzon.
Hybrid Annotation Systems for Music Transcription.
In Jorge Calvo-Zaragoza and Alexander Pacha, editors,
<em>Proceedings of the 3rd International Workshop on Reading Music
Systems</em>, pages 23-27, Alicante, Spain, 2021.
[ <a href="OMR-Research-Year_bib.html#Samiotis2021">bib</a> |
<a href="https://sites.google.com/view/worms2021/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Shatri2021">Shatri2021</a>]
</td>
<td class="bibtexitem">
Elona Shatri and György Fazekas.
DoReMi: First glance at a universal OMR dataset.
In Jorge Calvo-Zaragoza and Alexander Pacha, editors,
<em>Proceedings of the 3rd International Workshop on Reading Music
Systems</em>, pages 43-49, Alicante, Spain, 2021.
[ <a href="OMR-Research-Year_bib.html#Shatri2021">bib</a> |
<a href="https://sites.google.com/view/worms2021/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Wenzlitschke2021">Wenzlitschke2021</a>]
</td>
<td class="bibtexitem">
Nils Wenzlitschke.
Implementation and evaluation of a neural network for the recognition
of handwritten melodies.
In Jorge Calvo-Zaragoza and Alexander Pacha, editors,
<em>Proceedings of the 3rd International Workshop on Reading Music
Systems</em>, pages 38-42, Alicante, Spain, 2021.
[ <a href="OMR-Research-Year_bib.html#Wenzlitschke2021">bib</a> |
<a href="https://sites.google.com/view/worms2021/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="AlfaroContreras2020">AlfaroContreras2020</a>]
</td>
<td class="bibtexitem">
María Alfaro-Contreras, Jorge Calvo-Zaragoza, and José M.
Iñesta.
Reconocimiento holístico de partituras musicales.
Technical report, Departamento de Lenguajes y Sistemas Informáticos,
Universidad de Alicante, Spain, 2020.
[ <a href="OMR-Research-Year_bib.html#AlfaroContreras2020">bib</a> |
<a href="https://rua.ua.es/dspace/bitstream/10045/108270/1/Reconocimiento_holistico_de_partituras_musicales.pdf">.pdf</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Calvo-Zaragoza2020">Calvo-Zaragoza2020</a>]
</td>
<td class="bibtexitem">
Jorge Calvo-Zaragoza, Jan Hajič Jr., and Alexander Pacha.
Understanding Optical Music Recognition.
<em>ACM Comput. Surv.</em>, 53 (4), 2020.
ISSN 0360-0300.
[ <a href="OMR-Research-Year_bib.html#Calvo-Zaragoza2020">bib</a> |
<a href="http://dx.doi.org/10.1145/3397499">DOI</a> |
<a href="https://doi.org/10.1145/3397499">http</a> ]
<blockquote><font size="-1">
For over 50 years, researchers have been trying to teach computers to read music notation, referred to as Optical Music Recognition (OMR). However, this field is still difficult to access for new researchers, especially those without a significant musical background: Few introductory materials are available, and, furthermore, the field has struggled with defining itself and building a shared terminology. In this work, we address these shortcomings by (1) providing a robust definition of OMR and its relationship to related fields, (2) analyzing how OMR inverts the music encoding process to recover the musical notation and the musical semantics from documents, and (3) proposing a taxonomy of OMR, with most notably a novel taxonomy of applications. Additionally, we discuss how deep learning affects modern OMR research, as opposed to the traditional pipeline. Based on this work, the reader should be able to attain a basic understanding of OMR: its objectives, its inherent structure, its relationship to other fields, the state of the art, and the research opportunities it affords.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Castellanos2020">Castellanos2020</a>]
</td>
<td class="bibtexitem">
Francisco J. Castellanos, Antonio-Javier Gallego, and Jorge Calvo-Zaragoza.
Automatic scale estimation for music score images.
<em>Expert Systems with Applications</em>, page 113590, 2020.
ISSN 0957-4174.
[ <a href="OMR-Research-Year_bib.html#Castellanos2020">bib</a> |
<a href="http://dx.doi.org/10.1016/j.eswa.2020.113590">DOI</a> |
<a href="http://www.sciencedirect.com/science/article/pii/S0957417420304140">http</a> ]
<blockquote><font size="-1">
Optical Music Recognition (OMR) is the research field focused on the automatic reading of music from scanned images. Its main goal is to encode the content into a digital and structured format with the advantages that this entails. This discipline is traditionally aligned to a workflow whose first step is the document analysis. This step is responsible of recognizing and detecting different sources of information—e.g. music notes, staff lines and text—to extract them and then processing automatically the content in the following steps of the workflow. One of the most difficult challenges it faces is to provide a generic solution to analyze documents with diverse resolutions. The endless number of existing music sources does not meet a standard that normalizes the data collections, giving complete freedom for a wide variety of image sizes and scales, thereby making this operation unsustainable. In the literature, this question is commonly overlooked and a uniform scale is assumed. In this paper, a machine learning-based approach to estimate the scale of music documents with respect to a reference scale is presented. Our goal is to propose a robust and generalizable method to adapt the input image to the requirements of an OMR system. For this, two goal-directed case studies are included to evaluate the proposed approach over common task within the OMR workflow, comparing the behavior with other state-of-the-art methods. Results suggest that it is necessary to perform this additional step in the first stage of the workflow to correct the scale of the input images. In addition, it is empirically demonstrated that our specialized approach is more promising than image augmentation strategies for the multi-scale challenge.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Elezi2020">Elezi2020</a>]
</td>
<td class="bibtexitem">
Ismail Elezi.
Exploiting Contextual Information with Deep Neural Networks.
Master's thesis, Ca' Foscari University of Venice, 2020.
[ <a href="OMR-Research-Year_bib.html#Elezi2020">bib</a> |
<a href="https://arxiv.org/pdf/2006.11706.pdf">.pdf</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Henkel2020">Henkel2020</a>]
</td>
<td class="bibtexitem">
Florian Henkel, Rainer Kelz, and Gerhard Widmer.
Learning to Read and Follow Music in Complete Score Sheet Images.
In <em>Proceedings of the 21st Int. Society for Music Information
Retrieval Conf.</em>, 2020.
[ <a href="OMR-Research-Year_bib.html#Henkel2020">bib</a> |
<a href="https://program.ismir2020.net/poster_6-02.html">.html</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Mico2020">Mico2020</a>]
</td>
<td class="bibtexitem">
Luisa Micó, Jose Oncina, and José M. Iñesta.
Adaptively Learning to Recognize Symbols in Handwritten Early Music.
In Peggy Cellier and Kurt Driessens, editors, <em>Machine Learning
and Knowledge Discovery in Databases</em>, pages 470-477, Cham, 2020. Springer
International Publishing.
ISBN 978-3-030-43887-6.
[ <a href="OMR-Research-Year_bib.html#Mico2020">bib</a> |
<a href="http://dx.doi.org/10.1007/978-3-030-43887-6_40">DOI</a> ]
<blockquote><font size="-1">
Human supervision is necessary for a correct edition and publication of handwritten early music collections. The output of an optical music recognition system for that kind of documents may contain a significant number of errors, making it tedious to correct for a human expert. An adequate strategy is needed to optimize the human feedback information during the correction stage to adapt the classifier to the specificities of each manuscript. In this paper, we compare the performance of a neural system, difficult and slow to be retrained, and a nearest neighbor strategy, based on the neural codes provided by a neural net, trained offline, used as a feature extractor.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="MuNG">MuNG</a>]
</td>
<td class="bibtexitem">
Alexander Pacha and Jan Hajič jr.
The Music Notation Graph (MuNG) Repository.
<a href="https://github.com/OMR-Research/mung">https://github.com/OMR-Research/mung</a>, 2020.
[ <a href="OMR-Research-Year_bib.html#MuNG">bib</a> |
<a href="https://github.com/OMR-Research/mung">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Tardon2020">Tardon2020</a>]
</td>
<td class="bibtexitem">
Lorenzo J. Tardón, Isabel Barbancho, Ana M. Barbancho, and Ichiro
Fujinaga.
Automatic Staff Reconstruction within SIMSSA Project.
<em>Applied Sciences</em>, 10 (7): 2468-2484, 2020.
[ <a href="OMR-Research-Year_bib.html#Tardon2020">bib</a> |
<a href="http://dx.doi.org/10.3390/app10072468">DOI</a> |
<a href="https://www.mdpi.com/2076-3417/10/7/2468">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Tsai2020">Tsai2020</a>]
</td>
<td class="bibtexitem">
Timothy J. Tsai, Daniel Yang, Mengyi Shan, Thitaree Tanprasert, and Teerapat
Jenrungrot.
Using Cell Phone Pictures of Sheet Music To Retrieve MIDI Passages.
<em>IEEE Transactions on Multimedia</em>, pages 1-13, 2020.
[ <a href="OMR-Research-Year_bib.html#Tsai2020">bib</a> |
<a href="http://dx.doi.org/10.1109/TMM.2020.2973831">DOI</a> |
<a href="https://arxiv.org/abs/2004.11724">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Tuggener2020">Tuggener2020</a>]
</td>
<td class="bibtexitem">
Lukas Tuggener, Yvan Putra Satyawan, Alexander Pacha, Jürgen Schmidhuber,
and Thilo Stadelmann.
The DeepScoresV2 Dataset and Benchmark for Music Object Detection.
In <em>Proceedings of the 25th International Conference on Pattern
Recognition</em>, Milan, Italy, 2020.
[ <a href="OMR-Research-Year_bib.html#Tuggener2020">bib</a> |
<a href="http://dx.doi.org/10.21256/zhaw-20647">DOI</a> ]
<blockquote><font size="-1">
In this paper, we present DeepScoresV2, an extended version of the DeepScores dataset for optical music recognition (OMR). We improve upon the original DeepScores dataset by providing much more detailed annotations, namely (a) annotations for 135 classes including fundamental symbols of non-fixed size and shape, increasing the number of annotated symbols by 23%; (b) oriented bounding boxes; (c) higher-level rhythm and pitch information (onset beat for all symbols and line position for noteheads); and (d) a compatibility mode for easy use in conjunction with the MUSCIMA++ dataset for OMR on handwritten documents. These additions open up the potential for future advancement in OMR research. Additionally, we release two state-of-the-art baselines for DeepScoresV2 based on Faster R-CNN and the Deep Watershed Detector. An analysis of the baselines shows that regular orthogonal bounding boxes are unsuitable for objects which are long, small, and potentially rotated, such as ties and beams, which demonstrates the need for detection algorithms that naturally incorporate object angles.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Wick2020">Wick2020</a>]
</td>
<td class="bibtexitem">
Christoph Wick and Frank Puppe.
Automatic Neume Transcription of Medieval Music Manuscripts using
CNN/LSTM-Networks and the segmentation-free CTC-Algorithm.
Technical report, University of Würzburg, 2020.
[ <a href="OMR-Research-Year_bib.html#Wick2020">bib</a> |
<a href="http://dx.doi.org/10.20944/preprints202001.0149.v1">DOI</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Miro2019">Miro2019</a>]
</td>
<td class="bibtexitem">
Jordi Burgués Miró.
Recognition of musical symbols in scores using neural networks.
Master's thesis, Universitat Politècnica de Catalunya, Barcelona,
June 2019.
[ <a href="OMR-Research-Year_bib.html#Miro2019">bib</a> |
<a href="http://hdl.handle.net/2117/165583">http</a> ]
<blockquote><font size="-1">