fix dockerfile not building due to PEP517 error (#106)
matt-bernstein authored May 8, 2024
1 parent 8ae74c0 commit 0e0bb90
Showing 2 changed files with 13 additions and 5 deletions.
8 changes: 8 additions & 0 deletions Dockerfile.app
@@ -8,6 +8,8 @@ RUN apt-get update && apt-get install -y git gcc
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

RUN pip install --upgrade pip

# Install Poetry
RUN pip install poetry==1.8.1

@@ -17,6 +19,12 @@ WORKDIR /usr/src/app

COPY pyproject.toml poetry.lock ./

# fix error:
# KeyError: 'PEP517_BUILD_BACKEND'
# Note: This error originates from the build backend, and is likely not a problem with poetry but with paginate (0.5.6) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "paginate (==0.5.6)"'.
RUN poetry run pip install --upgrade pip setuptools wheel
RUN poetry run python -m pip install paginate==0.5.6 --no-cache-dir --no-use-pep517

# Install dependencies
RUN poetry config virtualenvs.create false \
&& poetry install --no-interaction --no-ansi --no-root --with label-studio
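The note in the Dockerfile quotes the failing build's own suggested check. A hedged sketch of verifying the failure and the opt-out flag the fix relies on (assumes `pip` is on `PATH`; the `pip wheel` reproduction needs network access, so it is shown as a comment rather than executed):

```shell
# Reproduction suggested by the error message itself (needs network):
#   pip wheel --no-cache-dir --use-pep517 "paginate==0.5.6"
# fails with: KeyError: 'PEP517_BUILD_BACKEND'

# The workaround flag is pip's standard opt-out of PEP 517 builds,
# documented in pip's own help text:
pip install --help | grep -- "no-use-pep517"
```

`--no-use-pep517` forces pip's legacy `setup.py`-based build path, which is why pre-installing paginate this way lets the subsequent `poetry install` succeed.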
10 changes: 5 additions & 5 deletions adala/runtimes/_openai.py
@@ -367,10 +367,10 @@ async def batch_to_batch(
# check for errors - if any, append to outputs and continue
if response.get("error"):
# FIXME if we collect failed and succeeded outputs in the same list -> df, we end up with an awkward schema like this:
# output   error  message   details
# ----------------------------------
# output1  nan    nan       nan
# nan      true   message2  details2
# we are not going to send the error response to lse
# outputs.append(response)
if self.verbose:
@@ -392,7 +392,7 @@ async def batch_to_batch(
# TODO: note that this doesn't work for multiple output fields e.g. `Output {output1} and Output {output2}`
output_df = InternalDataFrame(outputs)
# return output dataframe indexed as input batch.index, assuming outputs are in the same order as inputs
-return output_df.set_index('index')
+return output_df.set_index("index")

async def record_to_record(
self,
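The FIXME in this hunk describes the schema problem concretely. A minimal pure-Python sketch (hypothetical variable names, not the repo's actual code) of what happens when success records and error records share one list: the resulting table's columns become the union of both record types, with every missing cell filled as NaN.

```python
# Success rows and error rows carry different fields.
successes = [{"index": 0, "output": "output1"}]
errors = [{"index": 1, "error": True, "message": "message2", "details": "details2"}]
records = successes + errors

# Column set becomes the union of all keys, in first-seen order,
# mimicking DataFrame construction from a list of dicts.
columns = []
for r in records:
    for k in r:
        if k not in columns:
            columns.append(k)

# Every row gets every column; absent values are filled with NaN,
# which is the awkward mixed schema the FIXME complains about.
table = [{c: r.get(c, float("nan")) for c in columns} for r in records]
```

This is why the code skips appending error responses to `outputs`: keeping the two record shapes separate avoids the NaN-padded union schema.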
