This page provides troubleshooting help and answers to frequently asked
questions about running tests with Firebase Test Lab. Known issues are also
documented. If you can't find what
you're looking for or need additional help, join the #test-lab
channel on
Firebase Slack or contact Firebase
support.
Troubleshooting
Why is my test taking so long to run?
When you select a device with a high capacity level in the Test Lab
catalog, tests may start faster. When a
device has low capacity, tests might take longer to run. If the number of
tests invoked is much larger than the capacity of the selected devices, tests
can take longer to finish.
Tests running at any device capacity level may take longer due to the
following factors:
Traffic, which affects device availability and test speed.
Device or infrastructure failures, which can happen at any time. To check
if there is a reported infrastructure issue for Test Lab, see the
Firebase status dashboard.
To learn more about device capacity in Test Lab, see device capacity
information for Android and iOS.
Why am I receiving inconclusive test results?
Inconclusive test outcomes commonly occur either because of canceled test runs
or infrastructure errors.
Infrastructure errors are caused by internal Test Lab issues, like network
errors or unexpected device behaviors. Test Lab internally retries test runs
that produce infrastructure errors multiple times before reporting an
inconclusive outcome; however, you can disable these retries using
failFast.
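If you create test matrices directly through the Cloud Testing API, a minimal
sketch of a request body with these retries disabled might look like the
following (the testSpecification, environmentMatrix, and resultStorage contents
are elided because they depend on your test):

{
  "testSpecification": { ... },
  "environmentMatrix": { ... },
  "resultStorage": { ... },
  "failFast": true
}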
To determine the cause of the error, follow these steps:
1. Check for known outages in the Firebase status dashboard.
2. Retry the test in Test Lab to verify that it is reproducible. Note that
Test Lab does not charge you for infrastructure errors.
3. Try running the test on a different device or device type, if applicable.
If the issue persists, contact the Test Lab team in the
#test-lab channel on
Firebase Slack.
Why did sharding make my tests run
longer?
Sharding can cause your tests to run longer when the number of shards you
specified exceeds the number of devices available for use in Test Lab. To
avoid this situation, try switching to a different device. For more information
about choosing a different device, see
Device Capacity.
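As an illustration, a hypothetical Android invocation that requests uniform
sharding might look like the following; keeping the shard count at or below the
number of devices you expect to be available helps avoid this queuing:

# A sketch only: the APK paths, MODEL, and VERSION are placeholders.
gcloud firebase test android run \
  --type instrumentation \
  --app app-debug.apk \
  --test app-debug-androidTest.apk \
  --device model=MODEL,version=VERSION \
  --num-uniform-shards=4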
Why is it taking a long time for my
test to start?
When you submit a test request, your app is first validated, re-signed, and
otherwise prepared for running tests on a device. Normally, this process
completes within a few seconds, but it can be affected by factors like the size of your
app.
After your app is prepared, test executions are scheduled and remain in a queue
until a device is ready to run them. Until all test executions finish running,
the matrix status will be "Pending" (regardless of whether test executions are
in the queue or actively running). Note that the time your test spends waiting
for an available device does not count toward your billing time.
Why is it taking a long time for my
test to finish?
After the test execution is finished, test artifacts are downloaded from the
device, processed, and uploaded to Cloud Storage. The duration of this step can
be affected by the amount and size of the artifacts.
Frequently asked questions
What are the no-cost quotas
for Test Lab? What should I do if I run out?
Firebase Test Lab offers no-cost quotas for testing on devices and for using
Cloud APIs. Note that the testing quota uses the standard Firebase pricing plan,
while the Cloud API quotas do not.
Testing quota
Testing quotas are determined by the number of devices used to run tests.
The Firebase Spark plan has a fixed testing quota at no cost to users. For
the Blaze plan, your quotas might increase if your usage of Google Cloud
increases over time. If you reach your testing quota, wait until the next
day or upgrade to the Blaze plan if you are currently on the Spark plan.
If you are already on the Blaze plan, you can request a quota increase.
For more information, see Testing quota. You can monitor your testing quota
usage in the Google Cloud console.
Cloud Testing API quota
The Cloud Testing API comes with two quota limits: requests per day per
project, and requests per every 100 seconds per project. You can monitor your
usage in the
Google Cloud console.
Cloud Tool Results API quota
The Cloud Tool Results API comes with two quota limits: queries per day per
project, and queries per every 100 seconds per project. You can monitor your
usage in the
Google Cloud console.
Refer to Cloud API quotas for Test Lab for more information on API limits. If
you've reached an API quota, you can either:
Submit a request for higher quotas by editing your quotas directly in the
Google Cloud console (note that most limits are set to the maximum by default), or
Request higher API quotas by filling out a request form in the Google Cloud
console or by contacting Firebase support.
How do I find out if the
traffic reaching my backend is coming from Test Lab?
From your backend, you can determine if traffic is coming from Firebase-hosted
test devices by checking the source IP address against our
IP ranges.
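As a minimal sketch, assuming you have saved the published IPv4 blocks to a
local file named ranges.txt (one CIDR block per line), you could check a logged
client address like this (203.0.113.7 is just an example address):

# A sketch only: ranges.txt and the address below are placeholders.
python3 -c "import ipaddress, sys; ip = ipaddress.ip_address(sys.argv[1]); print(any(ip in ipaddress.ip_network(line.strip(), strict=False) for line in open('ranges.txt') if line.strip()))" 203.0.113.7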
Does Test Lab work with
VPC-SC?
Test Lab does not work with VPC-SC, which blocks the
copying of apps and other test artifacts between Test Lab's internal
storage and users' results buckets.
How do I detect flaky tests in
Test Lab?
To detect flaky behavior in your tests, we recommend using the
--num-flaky-test-attempts
option. Deflake reruns are billed or counted toward your daily quota the same as
normal test executions.
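For example, a hypothetical iOS invocation that reruns the whole execution up
to two additional times on failure might look like this:

# A sketch only: the zip path, MODEL, and VERSION are placeholders.
gcloud firebase test ios run \
  --test MyAppTests.zip \
  --device model=MODEL,version=VERSION \
  --num-flaky-test-attempts=2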
Keep the following in mind:
The entire test execution runs again when a failure is detected. There’s no
support for retrying only failed test cases.
Deflake retry runs are scheduled to run at the same time, but are not
guaranteed to run in parallel, for example, when traffic exceeds the number of
available devices.
Note that infrastructure errors are independent from the deflake feature and
don't trigger deflake reruns.
Does Test Lab support
Appium, Flutter/FlutterDriver, ReactNative/Jest, or Cucumber?
While some of these items are on our roadmap, we can't currently commit to
supporting these testing and app development platforms.
Where can I find device details,
like resolution, etc.?
Detailed device information is available through the API and can be accessed
from the gcloud client using the
describe command:
gcloud firebase test ios models describe MODEL
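To find valid model IDs to pass to describe, you can list the device catalog
(the android variant of both commands works the same way):

gcloud firebase test ios models list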
Can I use sharding with iOS tests?
Sharding isn't natively supported within Test Lab for iOS. However, you can
use the Flank client to shard iOS test cases. Note that using Flank iOS
sharding creates separate test matrices for each shard.
This works by setting the OnlyTestIdentifiers key and values in the .xctestrun
file. See the man page for xcodebuild.xctestrun for more details.
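As an illustration only (the target, class, and test names below are
hypothetical), the relevant fragment of an .xctestrun plist might look like this:

<key>MyAppUITests</key>
<dict>
    <!-- Hypothetical target; keep the rest of the generated settings as-is. -->
    <key>OnlyTestIdentifiers</key>
    <array>
        <string>LoginTests/testValidCredentials</string>
        <string>LoginTests/testInvalidPassword</string>
    </array>
</dict>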
Why is my iOS test missing videos in the
results?
For devices running iOS 18 or later, Test Lab can't include videos in the test results.
Known issues
Sign-in Captchas
Robo test cannot bypass sign-in screens that require
additional user action beyond entering credentials to sign in, for example,
completing a CAPTCHA.
UI framework support
Robo test works best with apps that use UI elements from the Android UI
framework (including View, ViewGroup, and WebView
objects). If you use Robo test to exercise apps that use other UI
frameworks, including apps that use the Unity game engine, the test may exit
without exploring beyond the first screen.