This repository was archived by the owner on Mar 6, 2026. It is now read-only.
perf: cache first page of `jobs.getQueryResults` rows #374 -- Update `QueryJob` and `RowIterator` to cache the first page of results, which we fetch as part of the logic to wait for the job to finish. Discard the cache if `maxResults` or `startIndex` are set.
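The cache-validity rule above can be sketched as a small predicate. This is an illustrative helper under assumed names (`should_use_cache` and its parameters are not the real google-cloud-bigquery internals):

```python
def should_use_cache(first_page, max_results=None, start_index=None):
    """Return True when the page cached while polling the job can be reused.

    The cache is only valid for a full, unpaginated read: if the caller
    requests maxResults or startIndex, the cached first page may not line
    up with the requested window, so it must be discarded.
    """
    if first_page is None:
        return False
    if max_results is not None or start_index is not None:
        return False
    return True
```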
- Update DB-API to not call the BQ Storage API if cached results are the only page.
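A minimal sketch of that decision, assuming the cursor can see the cached page and its next-page token; `use_bqstorage` is a hypothetical helper, not the actual DB-API code:

```python
def use_bqstorage(cached_page, next_page_token):
    """Decide whether opening a BQ Storage API read session is worthwhile."""
    # A cached page with no next-page token means every row is already in
    # memory, so the extra Storage API connection would be pure overhead.
    if cached_page is not None and next_page_token is None:
        return False
    return True
```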
- Update `Client.query` to call the `jobs.query` backend API method for acceptable `job_configs`.
- (optional?) Avoid a call to `jobs.get` in certain cases, such as `QueryJob.to_dataframe` and `QueryJob.to_arrow`.
- Add a `reload` argument to `QueryJob.result()`, defaulting to `True`.
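The proposed flag could behave roughly like this sketch; `QueryJob` here is a simplified stand-in for the real job class, not the library implementation:

```python
class QueryJob:
    """Toy job object illustrating a reload flag on result()."""

    def __init__(self, done=False):
        self._done = done
        self.reload_calls = 0  # counts simulated jobs.get round trips

    def _reload(self):
        # Stand-in for the jobs.get API call that refreshes job state.
        self.reload_calls += 1
        self._done = True

    def result(self, reload=True):
        # With reload=False, a caller that already knows the job finished
        # can skip the extra jobs.get round trip entirely.
        if reload and not self._done:
            self._reload()
        return []
```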
- Update `RowIterator` to call `get_job` to fetch the destination table ID before attempting to use the BQ Storage API (when the destination table ID isn't already available).
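The ordering rule amounts to: resolve the destination table via `jobs.get` only when it is missing, then hand it to the Storage API reader. In this hedged sketch, `get_job` is a placeholder callable and the job is a plain dict, not the real client objects:

```python
def resolve_destination(job, get_job):
    """Return the destination table ID, refreshing the job if needed."""
    if job.get("destinationTable") is None:
        # One extra jobs.get call fills in the table the query wrote to.
        job = get_job(job["jobId"])
    return job["destinationTable"]
```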
This issue tracks the "fast query path" changes for the Python client(s):
- Use `jobs.getQueryResults` to download result sets #363 -- Update `QueryJob` to use `getQueryResults` in `RowIterator`. Project down to avoid fetching schema and other unnecessary job stats in `RowIterator`.
- Cache first page of `jobs.getQueryResults` rows #374 -- Update `QueryJob` and `RowIterator` to cache the first page of results, which we fetch as part of the logic to wait for the job to finish. Discard the cache if `maxResults` or `startIndex` are set.
- Use `getQueryResults` from DB-API #375 -- Update DB-API to avoid a direct call to `list_rows()`.
- `to_dataframe` if all rows are cached #384 -- Update `to_dataframe` and related methods in `RowIterator` to not call the BQ Storage API if cached results are the only page.
- Update `Client.query` to call the `jobs.query` backend API method for acceptable `job_configs`.
- Avoid a call to `jobs.get` in certain cases, such as `QueryJob.to_dataframe` and `QueryJob.to_arrow`.
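Taken together, the items above add up to a flow roughly like the following sketch. Every name here (`run_query_fast_path`, the backend methods, the response keys) is illustrative only, not the real client or REST API surface:

```python
def run_query_fast_path(sql, backend):
    """Illustrate the fast query path: inline first page, then paging."""
    # 1. jobs.query starts the job and may return the first page inline.
    response = backend.jobs_query(sql)
    first_page = response.get("rows")
    # 2. If that page is complete, no further API calls are needed.
    if first_page is not None and response.get("pageToken") is None:
        return first_page
    # 3. Otherwise, page through jobs.getQueryResults for the remainder.
    rows = list(first_page or [])
    token = response.get("pageToken")
    while token:
        page = backend.get_query_results(response["jobId"], page_token=token)
        rows.extend(page["rows"])
        token = page.get("pageToken")
    return rows
```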