Pipeline
==========

Minimal Example
^^^^^^^^^^^^^^^^^
.. code:: python

    import os

    from magic_pdf.data.data_reader_writer import FileBasedDataWriter, FileBasedDataReader
    from magic_pdf.data.dataset import PymuDocDataset
    from magic_pdf.model.doc_analyze_by_custom_model import doc_analyze

    # args
    pdf_file_name = "abc.pdf"  # replace with the real pdf path
    name_without_suff = pdf_file_name.split(".")[0]

    # prepare env
    local_image_dir, local_md_dir = "output/images", "output"
    image_dir = str(os.path.basename(local_image_dir))

    os.makedirs(local_image_dir, exist_ok=True)

    image_writer, md_writer = FileBasedDataWriter(local_image_dir), FileBasedDataWriter(
        local_md_dir
    )

    # read bytes
    reader1 = FileBasedDataReader("")
    pdf_bytes = reader1.read(pdf_file_name)  # read the pdf content

    # proc
    ## Create Dataset Instance
    ds = PymuDocDataset(pdf_bytes)

    ds.apply(doc_analyze, ocr=True).pipe_ocr_mode(image_writer).dump_md(md_writer, f"{name_without_suff}.md", image_dir)
Running the above code produces the following output layout:
.. code:: bash

    output/
    ├── abc.md
    └── images
Excluding environment setup, such as creating directories and importing dependencies, the code that actually converts the PDF to Markdown is as follows:
.. code:: python

    # read bytes
    reader1 = FileBasedDataReader("")
    pdf_bytes = reader1.read(pdf_file_name)  # read the pdf content

    # proc
    ## Create Dataset Instance
    ds = PymuDocDataset(pdf_bytes)

    ds.apply(doc_analyze, ocr=True).pipe_ocr_mode(image_writer).dump_md(md_writer, f"{name_without_suff}.md", image_dir)
``ds.apply(doc_analyze, ocr=True)`` generates an ``InferenceResult`` object. Calling ``pipe_ocr_mode`` on the ``InferenceResult`` produces a ``PipeResult`` object,
and calling ``dump_md`` on the ``PipeResult`` writes a ``markdown`` file to the specified location.
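The same chain can also be written stage by stage, which makes the intermediate objects explicit. The sketch below uses only the methods shown above; the variable names ``infer_result`` and ``pipe_result`` are illustrative.

.. code:: python

    infer_result = ds.apply(doc_analyze, ocr=True)           # Dataset -> InferenceResult
    pipe_result = infer_result.pipe_ocr_mode(image_writer)   # InferenceResult -> PipeResult
    pipe_result.dump_md(md_writer, f"{name_without_suff}.md", image_dir)  # write the markdown file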
The pipeline execution process is illustrated in the following diagram:

.. image:: ../../_static/image/pipeline.drawio.svg

.. raw:: html

    <br/>
Currently, the process is divided into three stages: data, inference, and processing, which correspond to the ``Dataset``, ``InferenceResult``, and ``PipeResult`` entities in the diagram.
These stages are linked together through methods such as ``apply``, ``doc_analyze``, and ``pipe_ocr_mode``.
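The stages can also be recombined in other ways, for example by skipping OCR for text-born PDFs. The sketch below assumes the ``classify`` and ``pipe_txt_mode`` helpers and the ``SupportedPdfParseMethod`` enum are available in the same package; check your installed release, and see the quick-start examples referenced in the tip below for the canonical form.

.. code:: python

    # Assumed helpers: ds.classify(), pipe_txt_mode(), SupportedPdfParseMethod.
    from magic_pdf.config.enums import SupportedPdfParseMethod

    if ds.classify() == SupportedPdfParseMethod.OCR:
        pipe_result = ds.apply(doc_analyze, ocr=True).pipe_ocr_mode(image_writer)
    else:
        pipe_result = ds.apply(doc_analyze, ocr=False).pipe_txt_mode(image_writer)

    pipe_result.dump_md(md_writer, f"{name_without_suff}.md", image_dir)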
.. admonition:: Tip
    :class: tip

    For more examples of how to use ``Dataset``, ``InferenceResult``, and ``PipeResult``, please refer to :doc:`../quick_start/to_markdown`.

    For more detailed information about ``Dataset``, ``InferenceResult``, and ``PipeResult``, please refer to :doc:`../../api/dataset`, :doc:`../../api/model_operators`, and :doc:`../../api/pipe_operators`.
Pipeline Composition
^^^^^^^^^^^^^^^^^^^^^
.. code:: python

    class Dataset(ABC):
        @abstractmethod
        def apply(self, proc: Callable, *args, **kwargs):
            """Apply a callable to this dataset.

            Args:
                proc (Callable): invoked as follows:
                    proc(self, *args, **kwargs)

            Returns:
                Any: the result generated by proc
            """
            pass


    class InferenceResult(InferenceResultBase):
        def apply(self, proc: Callable, *args, **kwargs):
            """Apply a callable to this inference result.

            Args:
                proc (Callable): invoked as follows:
                    proc(inference_result, *args, **kwargs)

            Returns:
                Any: the result generated by proc
            """
            return proc(copy.deepcopy(self._infer_res), *args, **kwargs)

        def pipe_ocr_mode(
            self,
            imageWriter: DataWriter,
            start_page_id=0,
            end_page_id=None,
            debug_mode=False,
            lang=None,
        ) -> PipeResult:
            pass


    class PipeResult:
        def apply(self, proc: Callable, *args, **kwargs):
            """Apply a callable to this pipeline result.

            Args:
                proc (Callable): invoked as follows:
                    proc(pipeline_result, *args, **kwargs)

            Returns:
                Any: the result generated by proc
            """
            return proc(copy.deepcopy(self._pipe_res), *args, **kwargs)
The ``Dataset``, ``InferenceResult``, and ``PipeResult`` classes all provide an ``apply`` method, which can be used to chain the different stages of the computation.
As shown below, ``MinerU`` provides a set of methods to compose these classes.
.. code:: python

    # proc
    ## Create Dataset Instance
    ds = PymuDocDataset(pdf_bytes)

    ds.apply(doc_analyze, ocr=True).pipe_ocr_mode(image_writer).dump_md(md_writer, f"{name_without_suff}.md", image_dir)
Users can implement their own functions for chaining as needed. For example, a user can write a function that counts the number of pages in a ``pdf`` file and run it through the ``apply`` method.
.. code:: python

    from magic_pdf.data.data_reader_writer import FileBasedDataReader
    from magic_pdf.data.dataset import PymuDocDataset

    # args
    pdf_file_name = "abc.pdf"  # replace with the real pdf path

    # read bytes
    reader1 = FileBasedDataReader("")
    pdf_bytes = reader1.read(pdf_file_name)  # read the pdf content

    # proc
    ## Create Dataset Instance
    ds = PymuDocDataset(pdf_bytes)

    def count_page(ds) -> int:
        return len(ds)

    print("page number: ", ds.apply(count_page))  # will output the page count of `abc.pdf`
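The later stages accept custom callables in the same way. According to the ``apply`` signatures shown under Pipeline Composition, the callable receives a deep copy of the stage's internal result; the sketch below assumes that result holds one entry per page, so ``len`` yields a page count.

.. code:: python

    from magic_pdf.model.doc_analyze_by_custom_model import doc_analyze

    infer_result = ds.apply(doc_analyze, ocr=True)

    def count_inferred_pages(infer_res) -> int:
        # `infer_res` is the deep-copied internal result passed in by
        # InferenceResult.apply; assumed here to contain one entry per page.
        return len(infer_res)

    print("inferred pages: ", infer_result.apply(count_inferred_pages))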