Question Answering
Question Answering predictive task.
FewshotExample
Bases: FewshotExample
Few-shot example with questions and answers for a context.
Source code in sieves/tasks/predictive/question_answering/core.py
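A minimal construction sketch. The field names (`text`, `questions`, `answers`) are assumptions for illustration and may not match the actual model:

```python
from sieves.tasks.predictive.question_answering.core import FewshotExample

# Field names below are illustrative assumptions, not confirmed API.
example = FewshotExample(
    text="The Eiffel Tower was completed in 1889 in Paris.",
    questions=["When was the Eiffel Tower completed?"],
    answers=["1889"],
)
```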
from_dspy(example) (classmethod)
Convert from dspy.Example.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| example | Example | Example as dspy.Example. | required | 
Returns:
| Type | Description | 
|---|---|
| Self | Example as FewshotExample. | 
Source code in sieves/tasks/predictive/core.py
to_dspy()
Convert to dspy.Example.
Returns:
| Type | Description | 
|---|---|
| Example | Example as dspy.Example. | 
Source code in sieves/tasks/predictive/core.py
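A round-trip sketch covering both converters, reusing the hypothetical `example` instance from the snippet above:

```python
import dspy

# FewshotExample -> dspy.Example and back; from_dspy is a classmethod.
dspy_example = example.to_dspy()
assert isinstance(dspy_example, dspy.Example)
restored = FewshotExample.from_dspy(dspy_example)
```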
QuestionAnswering
Bases: PredictiveTask[_TaskPromptSignature, _TaskResult, _TaskBridge]
Answer questions about a text using structured engines.
Source code in sieves/tasks/predictive/question_answering/core.py
fewshot_examples (property)
Return few-shot examples.
Returns:
| Type | Description | 
|---|---|
| Sequence[FewshotExample] | Few-shot examples. | 
id (property)
Return task ID.
Used by pipeline for results and dependency management.
Returns:
| Type | Description | 
|---|---|
| str | Task ID. | 
prompt_signature_description (property)
Return prompt signature description.
Returns:
| Type | Description | 
|---|---|
| str | None | Prompt signature description. | 
prompt_template (property)
Return prompt template.
Returns:
| Type | Description | 
|---|---|
| str | Prompt template. | 
__add__(other)
Chain this task with another task or pipeline using the + operator.
This returns a new Pipeline that executes this task first, followed by the
task(s) in other. The original task(s)/pipeline are not mutated.
Cache semantics:
- If other is a Pipeline, the resulting pipeline adopts other's
  use_cache setting (because the left-hand side is a single task).
- If other is a Task, the resulting pipeline defaults to use_cache=True.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| other | Task | Pipeline | A Task or Pipeline to chain after this task. | required | 
Returns:
| Type | Description | 
|---|---|
| Pipeline | A new Pipeline executing this task first, then other. | 
Raises:
| Type | Description | 
|---|---|
| TypeError | If other is neither a Task nor a Pipeline. | 
Source code in sieves/tasks/core.py
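A chaining sketch under the cache semantics described above. `qa_task` is constructed in the `__init__` example further down; `summarization_task` is a hypothetical second task:

```python
# `+` returns a new Pipeline; neither operand is mutated.
pipeline = qa_task + summarization_task
# Task + Task defaults to use_cache=True; Task + Pipeline adopts the
# right-hand pipeline's use_cache setting instead.
```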
__call__(docs)
Execute the task on a set of documents.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| docs | Iterable[Doc] | Documents to process. | required | 
Returns:
| Type | Description | 
|---|---|
| Iterable[Doc] | Processed documents. | 
Source code in sieves/tasks/predictive/core.py
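A minimal call sketch, assuming the `qa_task` constructed in the `__init__` example below; `Doc` is sieves' document container:

```python
from sieves import Doc

docs = [Doc(text="Marie Curie won the Nobel Prize in Physics in 1903.")]
# Calling the task yields the processed documents with answers attached.
processed = list(qa_task(docs))
```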
__init__(questions, model, task_id=None, include_meta=True, batch_size=-1, prompt_instructions=None, fewshot_examples=(), generation_settings=GenerationSettings())
Initialize QuestionAnswering task.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| questions | list[str] | Questions to answer. | required | 
| model | _TaskModel | Model to use. | required | 
| task_id | str | None | Task ID. | None | 
| include_meta | bool | Whether to include meta information generated by the task. | True | 
| batch_size | int | Batch size to use for inference. Use -1 to process all documents at once. | -1 | 
| prompt_instructions | str | None | Custom prompt instructions. If None, default instructions are used. | None | 
| fewshot_examples | Sequence[FewshotExample] | Few-shot examples. | () | 
| generation_settings | GenerationSettings | Settings for structured generation. | GenerationSettings() | 
Source code in sieves/tasks/predictive/question_answering/core.py
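A construction sketch. The model object is engine-specific and elided here, and the re-export as `tasks.QuestionAnswering` is an assumption; import from `sieves.tasks.predictive.question_answering.core` otherwise:

```python
from sieves import tasks

model = ...  # engine-specific model object (DSPy, Outlines, LangChain, ...)

qa_task = tasks.QuestionAnswering(
    questions=[
        "Who is mentioned in the text?",
        "Which year is referenced?",
    ],
    model=model,
    batch_size=-1,  # process all documents at once
)
```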
deserialize(config, **kwargs) (classmethod)
Generate PredictiveTask instance from config.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| config | Config | Config to generate instance from. | required | 
| kwargs | dict[str, Any] | Values to inject into loaded config. | {} | 
Returns:
| Type | Description | 
|---|---|
| PredictiveTask[TaskPromptSignature, TaskResult, TaskBridge] | Deserialized PredictiveTask instance. | 
Source code in sieves/tasks/predictive/core.py
optimize(optimizer, verbose=True)
Optimize task prompt and few-shot examples with the available optimization config.
Updates task to use best prompt and few-shot examples found by the optimizer.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| optimizer | Optimizer | Optimizer to run. | required | 
| verbose | bool | Whether to show optimizer output. DSPy produces a good amount of logs, so disabling this can be useful to avoid polluting your terminal; only warnings and errors will then be printed. | True | 
Returns:
| Type | Description | 
|---|---|
| tuple[str, Sequence[FewshotExample]] | Best found prompt and few-shot examples. | 
Source code in sieves/tasks/predictive/core.py
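A usage sketch; constructing the Optimizer is engine-specific and elided here:

```python
# Runs the optimizer and updates the task in place to use the best
# prompt and few-shot examples it found.
best_prompt, best_examples = qa_task.optimize(optimizer, verbose=False)
```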
serialize()
Serialize task.
Returns:
| Type | Description | 
|---|---|
| Config | Config instance. | 
Source code in sieves/tasks/core.py
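A serialization round-trip sketch combining serialize with the deserialize classmethod documented above. Injecting the model via a `model` kwarg is an assumption based on the kwargs description:

```python
config = qa_task.serialize()  # -> Config instance
# Non-serializable values (e.g. the model) are injected on load via kwargs;
# the exact kwarg name here is an assumption.
restored = type(qa_task).deserialize(config, model=model)
```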
Bridges for the question answering task.
DSPyQA
Bases: QABridge[PromptSignature, Result, InferenceMode]
DSPy bridge for question answering.
Source code in sieves/tasks/predictive/question_answering/bridges.py
prompt_template (property)
Return prompt template.
Chains _prompt_instructions, _prompt_example_template and _prompt_conclusion.
Note: different engines have different expectations as to what a prompt should look like. E.g. outlines supports the Jinja2 templating format for inserting values and few-shot examples, whereas DSPy integrates these in a different part of the workflow and hence expects the prompt not to include them. Mind engine-specific expectations when creating a prompt template.
Returns:
| Type | Description | 
|---|---|
| str | Prompt template as string. None if not used by engine. | 
__init__(task_id, prompt_instructions, questions)
Initialize QuestionAnsweringBridge.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| task_id | str | Task ID. | required | 
| prompt_instructions | str | None | Custom prompt instructions. If None, default instructions are used. | required | 
| questions | list[str] | Questions to answer. | required | 
Source code in sieves/tasks/predictive/question_answering/bridges.py
LangChainQA
Bases: PydanticBasedQA[InferenceMode]
LangChain bridge for question answering.
Source code in sieves/tasks/predictive/question_answering/bridges.py
prompt_template (property)
Return prompt template.
Chains _prompt_instructions, _prompt_example_template and _prompt_conclusion.
Note: different engines have different expectations as to what a prompt should look like. E.g. outlines supports the Jinja2 templating format for inserting values and few-shot examples, whereas DSPy integrates these in a different part of the workflow and hence expects the prompt not to include them. Mind engine-specific expectations when creating a prompt template.
Returns:
| Type | Description | 
|---|---|
| str | Prompt template as string. None if not used by engine. | 
__init__(task_id, prompt_instructions, questions)
Initialize QuestionAnsweringBridge.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| task_id | str | Task ID. | required | 
| prompt_instructions | str | None | Custom prompt instructions. If None, default instructions are used. | required | 
| questions | list[str] | Questions to answer. | required | 
Source code in sieves/tasks/predictive/question_answering/bridges.py
OutlinesQA
Bases: PydanticBasedQA[InferenceMode]
Outlines bridge for question answering.
Source code in sieves/tasks/predictive/question_answering/bridges.py
prompt_template (property)
Return prompt template.
Chains _prompt_instructions, _prompt_example_template and _prompt_conclusion.
Note: different engines have different expectations as to what a prompt should look like. E.g. outlines supports the Jinja2 templating format for inserting values and few-shot examples, whereas DSPy integrates these in a different part of the workflow and hence expects the prompt not to include them. Mind engine-specific expectations when creating a prompt template.
Returns:
| Type | Description | 
|---|---|
| str | Prompt template as string. None if not used by engine. | 
__init__(task_id, prompt_instructions, questions)
Initialize QuestionAnsweringBridge.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| task_id | str | Task ID. | required | 
| prompt_instructions | str | None | Custom prompt instructions. If None, default instructions are used. | required | 
| questions | list[str] | Questions to answer. | required | 
Source code in sieves/tasks/predictive/question_answering/bridges.py
PydanticBasedQA
Bases: QABridge[BaseModel, BaseModel, EngineInferenceMode], ABC
Base class for Pydantic-based question answering bridges.
Source code in sieves/tasks/predictive/question_answering/bridges.py
inference_mode (abstractmethod, property)
Return inference mode.
Returns:
| Type | Description | 
|---|---|
| EngineInferenceMode | Inference mode. | 
prompt_template (property)
Return prompt template.
Chains _prompt_instructions, _prompt_example_template and _prompt_conclusion.
Note: different engines have different expectations as to what a prompt should look like. E.g. outlines supports the Jinja2 templating format for inserting values and few-shot examples, whereas DSPy integrates these in a different part of the workflow and hence expects the prompt not to include them. Mind engine-specific expectations when creating a prompt template.
Returns:
| Type | Description | 
|---|---|
| str | Prompt template as string. None if not used by engine. | 
__init__(task_id, prompt_instructions, questions)
Initialize QuestionAnsweringBridge.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| task_id | str | Task ID. | required | 
| prompt_instructions | str | None | Custom prompt instructions. If None, default instructions are used. | required | 
| questions | list[str] | Questions to answer. | required | 
Source code in sieves/tasks/predictive/question_answering/bridges.py
QABridge
Bases: Bridge[_BridgePromptSignature, _BridgeResult, EngineInferenceMode], ABC
Abstract base class for question answering bridges.
Source code in sieves/tasks/predictive/question_answering/bridges.py
inference_mode (abstractmethod, property)
Return inference mode.
Returns:
| Type | Description | 
|---|---|
| EngineInferenceMode | Inference mode. | 
prompt_signature (abstractmethod, property)
Create output signature.
E.g.: Signature in DSPy, Pydantic objects in outlines, JSON schema in jsonformers.
This is engine-specific.
Returns:
| Type | Description | 
|---|---|
| type[TaskPromptSignature] | TaskPromptSignature | Output signature object. This can be an instance (e.g. a regex string) or a class (e.g. a Pydantic class). | 
prompt_template (property)
Return prompt template.
Chains _prompt_instructions, _prompt_example_template and _prompt_conclusion.
Note: different engines have different expectations as to what a prompt should look like. E.g. outlines supports the Jinja2 templating format for inserting values and few-shot examples, whereas DSPy integrates these in a different part of the workflow and hence expects the prompt not to include them. Mind engine-specific expectations when creating a prompt template.
Returns:
| Type | Description | 
|---|---|
| str | Prompt template as string. None if not used by engine. | 
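A conceptual sketch of the chaining described above. The private attribute names come from the docstring, while the concatenation logic is an assumption rather than the library's actual code:

```python
# How a bridge might assemble its template (conceptual sketch only).
parts = (
    bridge._prompt_instructions,      # task instructions
    bridge._prompt_example_template,  # few-shot section (omitted by e.g. DSPy)
    bridge._prompt_conclusion,        # closing instructions
)
prompt_template = "\n".join(part for part in parts if part)
```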
__init__(task_id, prompt_instructions, questions)
Initialize QuestionAnsweringBridge.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| task_id | str | Task ID. | required | 
| prompt_instructions | str | None | Custom prompt instructions. If None, default instructions are used. | required | 
| questions | list[str] | Questions to answer. | required | 
Source code in sieves/tasks/predictive/question_answering/bridges.py
consolidate(results, docs_offsets) (abstractmethod)
Consolidate results for document chunks into document results.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| results | Iterable[TaskResult] | Results per document chunk. | required | 
| docs_offsets | list[tuple[int, int]] | Chunk offsets per document. The chunk results for one document can be obtained by slicing results with the corresponding (start, end) pair. | required | 
Returns:
| Type | Description | 
|---|---|
| Iterable[TaskResult] | Results per document. | 
Source code in sieves/tasks/predictive/bridges.py
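A hedged sketch of what a concrete consolidate implementation might look like, slicing the flat chunk results by each document's (start, end) offsets; `merge` is a hypothetical helper:

```python
def consolidate(results, docs_offsets):
    """Illustrative only: merge chunk results into per-document results."""
    results = list(results)
    for start, end in docs_offsets:
        chunk_results = results[start:end]  # all chunk results of one doc
        # The merge strategy is task-specific; for QA one might combine
        # the per-chunk answers to each question. `merge` is hypothetical.
        yield merge(chunk_results)
```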
integrate(results, docs) (abstractmethod)
Integrate results into Doc instances.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| results | Iterable[TaskResult] | Results from prompt executable. | required | 
| docs | Iterable[Doc] | Doc instances to update. | required | 
Returns:
| Type | Description | 
|---|---|
| Iterable[Doc] | Updated doc instances. | 
Source code in sieves/tasks/predictive/bridges.py
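A hedged sketch of a concrete integrate implementation. Storing results on the document keyed by task ID is an assumption based on the id property's note about results management:

```python
def integrate(results, docs, task_id):
    """Illustrative only: attach one consolidated result per document."""
    for doc, result in zip(docs, results):
        doc.results[task_id] = result  # keying by task ID is an assumption
        yield doc
```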