Commit 12e3419
Signed-off-by: Michele Dolfi <[email protected]>
1 parent 7686083
Showing 5 changed files with 93 additions and 0 deletions.
19 changes: 19 additions & 0 deletions in tests/data/groundtruth/docling_v2/2305.03393v1-pg9-img.doctags.txt
@@ -0,0 +1,19 @@
<document>
<text><location><page_1><loc_22><loc_81><loc_79><loc_85></location>order to compute the TED score. Inference timing results for all experiments were obtained from the same machine on a single core with AMD EPYC 7763 CPU @ 2.45 GHz.</text>
<section_header_level_1><location><page_1><loc_22><loc_77><loc_52><loc_79></location>5.1 Hyper Parameter Optimization</section_header_level_1>
<text><location><page_1><loc_22><loc_68><loc_79><loc_77></location>We have chosen the PubTabNet data set to perform HPO, since it includes a highly diverse set of tables. Also we report TED scores separately for simple and complex tables (tables with cell spans). Results are presented in Table 1. It is evident that with OTSL, our model achieves the same TED score and a roughly 2x speed up in the inference runtime over HTML.</text>
<table>
<location><page_1><loc_23><loc_41><loc_78><loc_57></location>
<caption>Table 1. HPO performed in OTSL and HTML representation on the same transformer-based TableFormer [19] architecture, trained only on PubTabNet [22]. Effects of reducing the # of layers in encoder and decoder stages of the model show that smaller models trained on OTSL perform better, especially in recognizing complex table structures, and maintain a much higher mAP score than the HTML counterpart.</caption>
<row_0><col_0><body></col_0><col_1><body></col_1><col_2><col_header>Language</col_2><col_3><col_header>TEDs</col_3><col_4><col_header>TEDs</col_4><col_5><col_header>TEDs</col_5><col_6><col_header>mAP</col_6><col_7><col_header>Inference</col_7></row_0>
<row_1><col_0><col_header>enc-layers</col_0><col_1><col_header>dec-layers</col_1><col_2><body></col_2><col_3><col_header>simple</col_3><col_4><col_header>complex</col_4><col_5><col_header>all</col_5><col_6><col_header>(0.75)</col_6><col_7><col_header>time (secs)</col_7></row_1>
<row_2><col_0><body>6</col_0><col_1><body>6</col_1><col_2><body>OTSL HTML</col_2><col_3><body>0.965 0.969</col_3><col_4><body>0.934 0.927</col_4><col_5><body>0.955 0.955</col_5><col_6><body>0.88 0.857</col_6><col_7><body>2.73 5.39</col_7></row_2>
<row_3><col_0><body></col_0><col_1><body></col_1><col_2><body>OTSL HTML</col_2><col_3><body>0.938 0.952</col_3><col_4><body>0.904 0.909</col_4><col_5><body>0.927</col_5><col_6><body>0.853</col_6><col_7><body>1.97</col_7></row_3>
<row_4><col_0><body>2</col_0><col_1><body></col_1><col_2><body>OTSL</col_2><col_3><body>0.923</col_3><col_4><body>0.897 0.901</col_4><col_5><body>0.938 0.915</col_5><col_6><body>0.843</col_6><col_7><body>3.77</col_7></row_4>
<row_5><col_0><body></col_0><col_1><body></col_1><col_2><body>HTML</col_2><col_3><body>0.945</col_3><col_4><body></col_4><col_5><body>0.931</col_5><col_6><body>0.859 0.834</col_6><col_7><body>1.91 3.81</col_7></row_5>
<row_6><col_0><body></col_0><col_1><body>2</col_1><col_2><body>OTSL HTML</col_2><col_3><body>0.952 0.944</col_3><col_4><body>0.92 0.903</col_4><col_5><body>0.942 0.931</col_5><col_6><body>0.857 0.824</col_6><col_7><body>1.22 2</col_7></row_6>
</table>
<section_header_level_1><location><page_1><loc_22><loc_35><loc_43><loc_36></location>5.2 Quantitative Results</section_header_level_1>
<text><location><page_1><loc_22><loc_22><loc_79><loc_34></location>We picked the model parameter configuration that produced the best predictions and trained and evaluated it on three publicly available data sets: PubTabNet (395k samples), FinTabNet (113k samples) and PubTables-1M (about 1M samples). Performance results are presented in Table 2. It is clearly evident that the model trained on OTSL outperforms HTML across the board, keeping high TEDs and mAP scores even on difficult financial tables (FinTabNet) that contain sparse and large tables.</text>
<text><location><page_1><loc_22><loc_16><loc_79><loc_22></location>advantage over HTML when applied on a bigger data set like PubTables-1M and achieves significantly improved scores. Finally, OTSL achieves faster inference due to fewer decoding steps, which is a result of the reduced sequence representation.</text>
</document>
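The last paragraph of this groundtruth page attributes OTSL's faster inference to its reduced sequence representation. As a rough, self-contained illustration of that claim (using simplified stand-in token sets, not the actual TableFormer or doctags vocabularies), the sketch below counts the structural tokens a decoder would emit for a small 2x2 table in an HTML-style encoding versus an OTSL-style encoding with one token per cell plus a new-line token per row.

# Illustrative only: simplified token sets, not the exact TableFormer vocabulary.
# HTML-style structure tokens for a 2x2 table (one header row, one body row).
html_tokens = [
    "<thead>", "<tr>", "<td>", "</td>", "<td>", "</td>", "</tr>", "</thead>",
    "<tbody>", "<tr>", "<td>", "</td>", "<td>", "</td>", "</tr>", "</tbody>",
]

# OTSL-style tokens: "C" marks a new content cell, "NL" ends a row.
otsl_tokens = ["C", "C", "NL", "C", "C", "NL"]

print(f"HTML decoding steps: {len(html_tokens)}")  # 16
print(f"OTSL decoding steps: {len(otsl_tokens)}")  # 6

Fewer decoding steps per table translate directly into the shorter inference times reported in the table above.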
@@ -0,0 +1,22 @@
order to compute the TED score. Inference timing results for all experiments were obtained from the same machine on a single core with AMD EPYC 7763 CPU @ 2.45 GHz.

## 5.1 Hyper Parameter Optimization

We have chosen the PubTabNet data set to perform HPO, since it includes a highly diverse set of tables. Also we report TED scores separately for simple and complex tables (tables with cell spans). Results are presented in Table 1. It is evident that with OTSL, our model achieves the same TED score and a roughly 2x speed up in the inference runtime over HTML.

Table 1. HPO performed in OTSL and HTML representation on the same transformer-based TableFormer [19] architecture, trained only on PubTabNet [22]. Effects of reducing the # of layers in encoder and decoder stages of the model show that smaller models trained on OTSL perform better, especially in recognizing complex table structures, and maintain a much higher mAP score than the HTML counterpart.

| enc-layers | dec-layers | Language  | TEDs simple | TEDs complex | TEDs all    | mAP (0.75)  | Inference time (secs) |
|------------|------------|-----------|-------------|--------------|-------------|-------------|-----------------------|
| 6          | 6          | OTSL HTML | 0.965 0.969 | 0.934 0.927  | 0.955 0.955 | 0.88 0.857  | 2.73 5.39             |
|            |            | OTSL HTML | 0.938 0.952 | 0.904 0.909  | 0.927       | 0.853       | 1.97                  |
| 2          |            | OTSL      | 0.923       | 0.897 0.901  | 0.938 0.915 | 0.843       | 3.77                  |
|            |            | HTML      | 0.945       |              | 0.931       | 0.859 0.834 | 1.91 3.81             |
|            | 2          | OTSL HTML | 0.952 0.944 | 0.92 0.903   | 0.942 0.931 | 0.857 0.824 | 1.22 2                |

## 5.2 Quantitative Results

We picked the model parameter configuration that produced the best predictions and trained and evaluated it on three publicly available data sets: PubTabNet (395k samples), FinTabNet (113k samples) and PubTables-1M (about 1M samples). Performance results are presented in Table 2. It is clearly evident that the model trained on OTSL outperforms HTML across the board, keeping high TEDs and mAP scores even on difficult financial tables (FinTabNet) that contain sparse and large tables.

advantage over HTML when applied on a bigger data set like PubTables-1M and achieves significantly improved scores. Finally, OTSL achieves faster inference due to fewer decoding steps, which is a result of the reduced sequence representation.
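The groundtruth text above reports TED-based scores (TEDs) for table structure. The real metric compares HTML trees with content-aware edit costs; the sketch below is only a simplified stand-in for the idea, assuming the third-party zss package (Zhang-Shasha tree edit distance, installable with pip install zss) and a hypothetical teds_like() helper that normalizes the edit distance by tree size.

# Minimal sketch of a tree-edit-distance based table score; not the official
# TEDS implementation. Assumes the third-party `zss` package.
from zss import Node, simple_distance


def table_tree(rows):
    # Toy tree: table -> tr per row -> td per cell (cell text in the label).
    root = Node("table")
    for row in rows:
        tr = Node("tr")
        for cell in row:
            tr.addkid(Node(f"td:{cell}"))
        root.addkid(tr)
    return root


def tree_size(node):
    return 1 + sum(tree_size(c) for c in Node.get_children(node))


def teds_like(pred_rows, gt_rows):
    # Hypothetical helper: 1 - edit_distance / max(tree sizes), so identical
    # tables score 1.0 and very different ones approach 0.0.
    pred, gt = table_tree(pred_rows), table_tree(gt_rows)
    dist = simple_distance(pred, gt)
    return 1.0 - dist / max(tree_size(pred), tree_size(gt))


print(teds_like([["a", "b"], ["c", "d"]], [["a", "b"], ["c", "x"]]))  # ~0.86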
1 change: 1 addition & 0 deletions in tests/data/groundtruth/docling_v2/2305.03393v1-pg9-img.pages.json
Large diffs are not rendered by default.
@@ -0,0 +1,50 @@
from io import BytesIO
from pathlib import Path

import pytest

from docling.datamodel.base_models import DocumentStream
from docling.document_converter import DocumentConverter

from .verify_utils import verify_conversion_result_v2

GENERATE = False


def get_doc_path():
    pdf_path = Path("./tests/data/2305.03393v1-pg9-img.png")
    return pdf_path


@pytest.fixture
def converter():
    converter = DocumentConverter()
    return converter


def test_convert_path(converter: DocumentConverter):
    doc_path = get_doc_path()
    print(f"converting {doc_path}")

    doc_result = converter.convert(doc_path)
    verify_conversion_result_v2(
        input_path=doc_path, doc_result=doc_result, generate=GENERATE
    )


def test_convert_stream(converter: DocumentConverter):
    doc_path = get_doc_path()
    print(f"converting {doc_path}")

    buf = BytesIO(doc_path.open("rb").read())
    stream = DocumentStream(name=doc_path.name, stream=buf)

    doc_result = converter.convert(stream)
    verify_conversion_result_v2(
        input_path=doc_path, doc_result=doc_result, generate=GENERATE
    )
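For a quick manual check outside pytest, the same conversion path can be exercised directly. This is a minimal sketch assuming the docling v2 API used in the test above, where DocumentConverter.convert() returns a result whose document can be exported to Markdown; adjust the call if your installed version differs.

# Standalone sketch mirroring the test: convert the page image and print Markdown.
from pathlib import Path

from docling.document_converter import DocumentConverter

source = Path("./tests/data/2305.03393v1-pg9-img.png")
converter = DocumentConverter()
result = converter.convert(source)
print(result.document.export_to_markdown())

In the tests themselves, verify_conversion_result_v2 is called with generate=GENERATE, so flipping GENERATE to True appears to regenerate the groundtruth files (such as the doctags and Markdown shown above) instead of asserting against them.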