
Pipeline
========

Minimal Example
^^^^^^^^^^^^^^^

.. code:: python

    import os

    from magic_pdf.data.data_reader_writer import FileBasedDataWriter, FileBasedDataReader
    from magic_pdf.data.dataset import PymuDocDataset
    from magic_pdf.model.doc_analyze_by_custom_model import doc_analyze

    # args
    pdf_file_name = "abc.pdf"  # replace with the real pdf path
    name_without_suff = pdf_file_name.split(".")[0]

    # prepare env
    local_image_dir, local_md_dir = "output/images", "output"
    image_dir = str(os.path.basename(local_image_dir))

    os.makedirs(local_image_dir, exist_ok=True)

    image_writer, md_writer = FileBasedDataWriter(local_image_dir), FileBasedDataWriter(
        local_md_dir
    )

    # read bytes
    reader1 = FileBasedDataReader("")
    pdf_bytes = reader1.read(pdf_file_name)  # read the pdf content

    # proc
    ## Create Dataset Instance
    ds = PymuDocDataset(pdf_bytes)

    ds.apply(doc_analyze, ocr=True).pipe_ocr_mode(image_writer).dump_md(
        md_writer, f"{name_without_suff}.md", image_dir
    )
Running the above code produces the following output:

.. code:: bash

    output/
    ├── abc.md
    └── images
Excluding the environment setup, such as creating directories and importing dependencies, the code that actually converts the PDF to Markdown is the following:

.. code:: python

    # read bytes
    reader1 = FileBasedDataReader("")
    pdf_bytes = reader1.read(pdf_file_name)  # read the pdf content

    # proc
    ## Create Dataset Instance
    ds = PymuDocDataset(pdf_bytes)

    ds.apply(doc_analyze, ocr=True).pipe_ocr_mode(image_writer).dump_md(
        md_writer, f"{name_without_suff}.md", image_dir
    )
``ds.apply(doc_analyze, ocr=True)`` returns an ``InferenceResult`` object. Calling ``pipe_ocr_mode`` on the ``InferenceResult`` produces a ``PipeResult`` object, and calling ``dump_md`` on the ``PipeResult`` writes a ``markdown`` file to the specified location.
The pipeline execution process is illustrated in the following diagram:

.. image:: ../../_static/image/pipeline.drawio.svg

.. raw:: html

    <br/>
Currently, the process is divided into three stages: data, inference, and processing, which correspond to the ``Dataset``, ``InferenceResult``, and ``PipeResult`` entities in the diagram. These stages are linked together by methods such as ``apply``, ``doc_analyze``, and ``pipe_ocr_mode``.
.. admonition:: Tip
    :class: tip

    For more detailed information about ``Dataset``, ``InferenceResult``, and ``PipeResult``, refer to :doc:`../../api/dataset`, :doc:`../../api/model_operators`, and :doc:`../../api/pipe_operators`.
Pipeline Composition
^^^^^^^^^^^^^^^^^^^^

.. code:: python

    class Dataset(ABC):
        @abstractmethod
        def apply(self, proc: Callable, *args, **kwargs):
            """Apply the given callable to this dataset.

            Args:
                proc (Callable): invoked as proc(self, *args, **kwargs)

            Returns:
                Any: the result generated by proc
            """
            pass


    class InferenceResult(InferenceResultBase):
        def apply(self, proc: Callable, *args, **kwargs):
            """Apply the given callable to the inference result.

            Args:
                proc (Callable): invoked as proc(inference_result, *args, **kwargs)

            Returns:
                Any: the result generated by proc
            """
            return proc(copy.deepcopy(self._infer_res), *args, **kwargs)

        def pipe_ocr_mode(
            self,
            imageWriter: DataWriter,
            start_page_id=0,
            end_page_id=None,
            debug_mode=False,
            lang=None,
        ) -> PipeResult:
            pass


    class PipeResult:
        def apply(self, proc: Callable, *args, **kwargs):
            """Apply the given callable to the pipeline result.

            Args:
                proc (Callable): invoked as proc(pipeline_result, *args, **kwargs)

            Returns:
                Any: the result generated by proc
            """
            return proc(copy.deepcopy(self._pipe_res), *args, **kwargs)
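Note that ``InferenceResult.apply`` and ``PipeResult.apply`` pass a deep copy of the wrapped result to the callable, so a user-supplied function cannot corrupt internal state. A minimal, self-contained sketch of why this matters (``Result`` and ``destructive`` are stand-ins invented for illustration, not part of the real API):

.. code:: python

    import copy


    class Result:
        """Stand-in mimicking how apply hands out a deep copy."""

        def __init__(self, data):
            self._data = data

        def apply(self, proc, *args, **kwargs):
            # proc receives a deep copy, never the internal object itself
            return proc(copy.deepcopy(self._data), *args, **kwargs)


    def destructive(data):
        data.clear()  # mutates only the copy
        return len(data)


    r = Result({"pages": [1, 2, 3]})
    print(r.apply(destructive))  # prints 0
    print(len(r._data))          # prints 1: the original is untouched

Because ``destructive`` only ever sees a copy, ``r._data`` is unchanged after the call.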
The ``Dataset``, ``InferenceResult``, and ``PipeResult`` classes all provide an ``apply`` method, which can be used to chain the different stages of the computation. As shown below, ``MinerU`` provides a set of methods to compose these classes.
.. code:: python

    # proc
    ## Create Dataset Instance
    ds = PymuDocDataset(pdf_bytes)

    ds.apply(doc_analyze, ocr=True).pipe_ocr_mode(image_writer).dump_md(
        md_writer, f"{name_without_suff}.md", image_dir
    )
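The chained call works because each stage hands back the object for the next stage. The pattern can be sketched with toy classes (illustrative only; the real stage types are ``Dataset``, ``InferenceResult``, and ``PipeResult``):

.. code:: python

    from typing import Callable


    class Stage:
        """Toy stage: apply forwards self to proc, following the
        proc(self, *args, **kwargs) contract described above."""

        def __init__(self, value):
            self.value = value

        def apply(self, proc: Callable, *args, **kwargs):
            return proc(self, *args, **kwargs)


    def analyze(stage: Stage, scale: int = 1) -> Stage:
        # pretend "inference": build the next stage from the current one
        return Stage(stage.value * scale)


    def dump(stage: Stage) -> str:
        # pretend "dump_md": render the final result
        return f"result: {stage.value}"


    out = Stage(7).apply(analyze, scale=3).apply(dump)
    print(out)  # prints "result: 21"

Each ``apply`` returns whatever the callable produces, so as long as the callable returns an object with its own ``apply`` (or a final method such as ``dump_md``), the calls can be strung together in one expression.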
Users can also implement their own functions and chain them as needed. For example, the ``apply`` method can run a function that counts the number of pages in a ``pdf`` file.
.. code:: python

    from magic_pdf.data.data_reader_writer import FileBasedDataReader
    from magic_pdf.data.dataset import PymuDocDataset

    # args
    pdf_file_name = "abc.pdf"  # replace with the real pdf path

    # read bytes
    reader1 = FileBasedDataReader("")
    pdf_bytes = reader1.read(pdf_file_name)  # read the pdf content

    # proc
    ## Create Dataset Instance
    ds = PymuDocDataset(pdf_bytes)

    def count_page(ds) -> int:
        return len(ds)

    print("page number: ", ds.apply(count_page))  # will output the page count of `abc.pdf`
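The extra positional and keyword arguments of ``apply`` are forwarded to the callable (``proc(self, *args, **kwargs)``), so chained functions can also be parameterized. A self-contained sketch with a stand-in dataset (``FakeDataset`` and ``pages_longer_than`` are hypothetical names; ``PymuDocDataset.apply`` follows the same contract):

.. code:: python

    from typing import Callable


    class FakeDataset:
        """Stand-in for PymuDocDataset: holds a list of fake page texts."""

        def __init__(self, pages):
            self._pages = pages

        def __len__(self):
            return len(self._pages)

        def apply(self, proc: Callable, *args, **kwargs):
            # same contract as Dataset.apply: proc(self, *args, **kwargs)
            return proc(self, *args, **kwargs)


    def pages_longer_than(ds, min_chars: int) -> int:
        # count pages whose text exceeds min_chars characters
        return sum(1 for p in ds._pages if len(p) > min_chars)


    ds = FakeDataset(["short", "a much longer page of text", ""])
    print(ds.apply(pages_longer_than, 10))  # prints 1

Here ``10`` is passed through ``apply`` to ``pages_longer_than`` as ``min_chars``, just as ``ocr=True`` is passed through to ``doc_analyze`` in the earlier examples.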