The Newark and Bangalore probes were deprecated in November 2024. They will be shut down on February 10th, 2025.
Before the probes are shut down, we'll attempt to migrate any tests still running to a nearby public probe. Newark tests will be moved to North Virginia (unless a test is already using that location). Bangalore tests will be moved to Hyderabad (unless a test is already using that location).
The Dallas, San Francisco, and Toronto probes were deprecated in November 2024. They will be shut down on February 17th, 2025.
Before the probes are shut down, we'll attempt to migrate any tests still running to a nearby public probe. Dallas tests will be moved to Ohio (unless a test is already using that location). San Francisco tests will be moved to North California (unless a test is already using that location). Toronto tests will be moved to Montreal (unless a test is already using that location).
The Atlanta, Amsterdam, and New York probes were deprecated in November 2024. They will be shut down on February 24th, 2025.
Before the probes are shut down, we'll attempt to migrate any tests still running to a nearby public probe. Atlanta tests will be moved to North Virginia (unless a test is already using that location). Amsterdam tests will be moved to Zurich (unless a test is already using that location). New York tests will be moved to North Virginia (unless a test is already using that location).
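If you would rather not wait for the automatic migration, you can move affected checks off the deprecated probes yourself. The sketch below shows one possible approach against the Synthetic Monitoring API; the API host, endpoint paths, probe names, and token handling used here are assumptions, so verify them against the Synthetic Monitoring API documentation for your stack before running anything.

```python
# Hedged sketch: remap checks from deprecated probes to the suggested
# replacements listed in the notices above. The API host, endpoint paths,
# and probe names below are assumptions; confirm them for your stack.
import requests

SM_API = "https://synthetic-monitoring-api.grafana.net"  # assumption: host varies by region
HEADERS = {"Authorization": "Bearer <synthetic-monitoring-access-token>"}  # assumption

# Suggested replacements from the deprecation notices (probe names assumed).
REPLACEMENTS = {
    "Newark": "NorthVirginia",
    "Bangalore": "Hyderabad",
    "Dallas": "Ohio",
    "SanFrancisco": "NorthCalifornia",
    "Toronto": "Montreal",
    "Atlanta": "NorthVirginia",
    "Amsterdam": "Zurich",
    "NewYork": "NorthVirginia",
}

# Resolve probe names to IDs, skipping any pair that cannot be found.
probes = requests.get(f"{SM_API}/api/v1/probe/list", headers=HEADERS).json()
id_by_name = {p["name"]: p["id"] for p in probes}
id_map = {
    id_by_name[old]: id_by_name[new]
    for old, new in REPLACEMENTS.items()
    if old in id_by_name and new in id_by_name
}

# Rewrite each check's probe list, deduplicating in case the replacement
# probe is already selected on that check.
checks = requests.get(f"{SM_API}/api/v1/check/list", headers=HEADERS).json()
for check in checks:
    new_probes = sorted({id_map.get(pid, pid) for pid in check["probes"]})
    if new_probes != sorted(check["probes"]):
        check["probes"] = new_probes
        requests.post(f"{SM_API}/api/v1/check/update", headers=HEADERS, json=check)
```

Checks that already use the replacement location are left unchanged, matching the behaviour described in the notices above.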
Completed -
The scheduled maintenance has been completed.
Jan 15, 08:00 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jan 15, 07:00 UTC
Scheduled -
We will be undergoing scheduled maintenance for our cloud integrations. During this 1-hour maintenance window, customers will not be able to install or manage their cloud integrations in the following regions:
- Azure US Central
- Azure Netherlands
We appreciate your understanding as we work to enhance our services. If you have any questions or require assistance, please reach out to our support team.
Jan 7, 12:48 UTC
Resolved -
The incident is now resolved; we have not observed any recurrence since yesterday's update.
Jan 14, 08:38 UTC
Monitoring -
The engineering team has deployed a fix and is monitoring its effectiveness.
Jan 13, 13:19 UTC
Update -
We are continuing to investigate this issue.
Jan 10, 17:30 UTC
Update -
We are continuing to investigate this issue.
Jan 10, 16:51 UTC
Investigating -
We're experiencing intermittent delays in Grafana start-up time in the AWS US East region. Our engineers are actively working on a solution. We will provide updates accordingly.
Dec 1, 00:00 UTC
Completed -
The scheduled maintenance has been completed.
Jan 13, 12:00 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jan 13, 11:00 UTC
Scheduled -
We will be performing a scheduled migration of our cloud logs instances. There should be no impact to services during this time.
We appreciate your understanding as we work to enhance our services. If you have any questions or require assistance, please reach out to our support team.
Jan 13, 09:42 UTC
Resolved -
This incident has been resolved.
Jan 10, 23:46 UTC
Monitoring -
Starting at approximately 10:40 UTC today, we began receiving reports of degraded performance for the Confluent Cloud integration in Grafana Cloud. Users may experience gaps in metric data or encounter 500 errors when querying Confluent. We've identified the issue as originating in the Confluent Cloud API; Confluent has launched its own status page to track the ongoing incident: https://status.confluent.cloud/incidents/1kywc5qb2hv2
We will continue to monitor the situation and apply remediation if necessary.
Jan 10, 18:17 UTC
Resolved -
Engineering has released a fix, and as of 9:10am UTC customers should no longer experience issues with AzureAD login. At this time, we are considering this issue resolved.
Jan 10, 09:11 UTC
Investigating -
As of 9:20pm UTC on Jan 9, we were alerted to an issue where logins to environments via AzureAD were failing. Users with instances on the "fast" rolling release channel may see "Failed to get token from provider" when attempting to log in.
Engineering is actively engaged and assessing the issue. We will provide updates accordingly.
Jan 10, 08:51 UTC
Resolved -
Loki experienced a read outage from ~21:40-22:00 UTC today. We identified the issue, applied remediation, and continue to monitor for any further disruptions. At this time, we are considering this issue resolved.
Jan 9, 22:00 UTC
Resolved -
Connections have been stable since 14:50 UTC, and all 4 probes have been operating normally during that time. We are transitioning this incident from monitoring to resolved.
Jan 9, 01:34 UTC
Monitoring -
Between 11:00 and 14:50 UTC, Grafana Cloud regions in Singapore (prod-ap-southeast-1) and Indonesia (prod-ap-southeast-2) experienced issues connecting to synthetic monitoring public probes hosted on Linode: Atlanta, Toronto, Dallas, and Newark.
Changes to synthetic monitoring checks were delayed by up to 30 minutes while propagating to the Atlanta, Toronto, Dallas, and Newark probes. This affected customers in our prod-ap-southeast-1 (Singapore) and prod-ap-southeast-2 (Indonesia) regions. As of 14:50 UTC, these regions have maintained a continuous connection to the Linode probes. We continue to monitor these regions.
Jan 8, 15:51 UTC