OutSystems Developer Cloud
| Date | Sev | Type | Status | MTTD | Mitig (h) | Resol (h) | Sys-Wide | Note |
|---|---|---|---|---|---|---|---|---|
| 03-23 | SEV4 | Manually Created | in_triage | 0.0 | — | — | — | **Firing** Value: alert=1, highNarrow=0, highNarrowQuery=1, highWide=0, highWideQuery=1, lowNarrow=1, lowNarrowQuery=1, lowWide=1, lowWideQuery=1 Labels: - alertname = KEDA processing Latency - Error Budget Burn Rate is High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - alerttype = ice-keda-SLO-latency - cluster = rundev-osall-eu-west-1-01 - du = catalog-stack - grafana_folder = ICE - grafana_slo_severity = warning - grafana_slo_uuid = zro0c5ht0hh2jbc0y6yg0 - notificationtool = pagerduty - service = keda - service_name = keda - severity = warning - team = ICE - team_name = ICE Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-zro0c5ht0hh2jbc0y6yg0?orgId=1&var-cluster=rundev-osall-eu-west-1-01 - name = SLO Burn Rate High - runbook_url = https://outsystemsrd.atlassian.net/wiki/spaces/RDCCPC/pages/5564662104/ICE+KEDA+Latency+runbook - slo_name = KEDA processing Latency - summary = KEDA scaling latency SLO Burn Rate High Source: https://outsystems.grafana.net/alerting/grafana/cf7ih3o8xcwe9c/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcf7ih3o8xcwe9c&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3Dice-keda-SLO-latency&matcher=cluster%3Drundev-osall-eu-west-1-01&matcher=du%3Dcatalog-stack&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Dzro0c5ht0hh2jbc0y6yg0&matcher=notificationtool%3Dpagerduty&matcher=service%3Dkeda&matcher=service_name%3Dkeda&matcher=severity%3Dwarning&matcher=team%3DICE&matcher=team_name%3DICE&orgId=1 Dashboard: https://outsystems.grafana.net/d/grafana_slo_app-zro0c5ht0hh2jbc0y6yg0?from=1773942560000&orgId=1&to=1774312360367 Panel: https://outsystems.grafana.net/d/grafana_slo_app-zro0c5ht0hh2jbc0y6yg0?from=1773942560000&orgId=1&to=1774312360367&viewPanel=1 |
| 03-23 | SEV4 | Manually Created | in_triage | 0.0 | — | — | — | **Firing** Value: A=6.62807236541495, Aux_SuccessRate=6.62807236541495, IsFiring=1 Labels: - alertname = GA - Preview - Latency degradation (Ring +3 US) - alertingtool = pagerduty - grafana_folder = Consoles - sendResolve = false - severity = warning - team = ALMConsoles Annotations: - AlertTitle = GA - Preview - Latency degradation (Ring +3 US) - Dashboard = https://outsystems.grafana.net/d/dea7f92c-d866-456f-b723-b67d6ff960c7/previewindevices-console-latency?orgId=1&from=now-1h&var-tenantEnvironmentNames=ring-3-us01 - SlackNotificationTitle = Preview Latency Alert Ring+3 US - SuccessRate = 6.62807236541495 - description = Console requests are taking longer than 5 seconds for the last 30 minutes (GA - Ring +3 US - Preview) - runbook = https://outsystemsrd.atlassian.net/wiki/spaces/RKB/pages/3697836943/Runbook+Preview+in+devices+console+-+Latency - summary = Console requests are taking longer than 5 seconds for the last 30 minutes (GA - Ring +3 US - Preview) Source: https://outsystems.grafana.net/alerting/grafana/cexuzqv3zuk1sb/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcexuzqv3zuk1sb&matcher=alertingtool%3Dpagerduty&matcher=sendResolve%3Dfalse&matcher=severity%3Dwarning&matcher=team%3DALMConsoles&orgId=1 | |||
| 03-23 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=6501, C=1 Labels: - alertname = NATS JS max Consumers - alertingtool = pagerduty - alerttype = NATS-JS-max-consumers - cluster = rundev-ea-eu-ce-1-01 - dynTitle = NATS JS stream in ring EA stamp rundev-ea-eu-ce-1-01 namespace nats-runtime-events has replicas out of sync for longer than 5m. - environment = ea - grafana_folder = Infrastructure & Cloud Engineering (ICE) - namespace = nats-runtime-events - region = eu-central-1 - ring = ea - severity = warning - team = ice Annotations: - summary = Number of consumers from NATS JS on stamp rundev-ea-eu-ce-1-01 namespace nats-runtime-events is higher than 6000. Source: https://outsystems.grafana.net/alerting/grafana/beebepllktszkd/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dbeebepllktszkd&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3DNATS-JS-max-consumers&matcher=cluster%3Drundev-ea-eu-ce-1-01&matcher=dynTitle%3DNATS+JS+stream+in+ring+EA+stamp+rundev-ea-eu-ce-1-01+namespace+nats-runtime-events+has+replicas+out+of+sync+for+longer+than+5m.&matcher=environment%3Dea&matcher=namespace%3Dnats-runtime-events&matcher=region%3Deu-central-1&matcher=ring%3Dea&matcher=severity%3Dwarning&matcher=team%3Dice&orgId=1 Dashboard: https://outsystems.grafana.net/d/cdmhfj8mu1mv4d?from=1774305830000&orgId=1&to=1774309493577 Panel: https://outsystems.grafana.net/d/cdmhfj8mu1mv4d?from=1774305830000&orgId=1&to=1774309493577&viewPanel=29 | |||
| 03-23 | SEV2 | System-wide SLO | closed | 0.0 | 2.09 | 2.21 | Yes | SLO Name: ga Database Scripts Execution - SuccessRate - il-central-1 (ga-database-scripts-execution-successrate-il-central-1) SLO Service Name: Sys-Wide - GA - 1CP Composite (sys-wide-ga-1cp-composite) Alert Conditions: Average burn rate ≥ 30x and this condition lasts for 5 minutes; Average burn rate ≥ 10x and this condition lasts for 30 minutes; Average burn rate ≥ 5x and this condition lasts for 1 hour Ring: ga Region: il-central-1 Stamps: |
| 03-23 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: alert=1, highNarrow=1, highNarrowQuery=0.2222222222222222, highWide=1, highWideQuery=0.05263276774672854, lowNarrow=1, lowNarrowQuery=0.0625, lowWide=1, lowWideQuery=0.025316888313832053 Labels: - alertname = [SSC] Compositions CSA latency - Error Budget Burn Rate is High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - cluster = rundev-osall-us-east-2-04 - du = cloud-compositions - environment = osall - grafana_folder = [Platform Engineering] SSC - grafana_slo_severity = warning - grafana_slo_uuid = nyuv5xfxoikxajk0zz418 - notificationtool = pagerduty - service = cloud-compositions-provisioner - service_name = cloud-compositions - severity = warning - team = SSC - team_name = ssc Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?orgId=1&var-environment=osall&var-cluster=rundev-osall-us-east-2-04 - name = SLO Burn Rate High - runbook_url = - slo_name = [SSC] Compositions CSA latency - summary = Compositions CSA latency SLO Burn Rate High Source: https://outsystems.grafana.net/alerting/grafana/dfa56x9bn7pj5b/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Ddfa56x9bn7pj5b&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=cluster%3Drundev-osall-us-east-2-04&matcher=du%3Dcloud-compositions&matcher=environment%3Dosall&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Dnyuv5xfxoikxajk0zz418&matcher=notificationtool%3Dpagerduty&matcher=service%3Dcloud-compositions-provisioner&matcher=service_name%3Dcloud-compositions&matcher=severity%3Dwarning&matcher=team%3DSSC&matcher=team_name%3Dssc&orgId=1 Dashboard: https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774291190000&orgId=1&to=1774294855991 Panel: https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774291190000&orgId=1&to=1774294855991&viewPanel=1 |
| 03-23 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: alert=1, highNarrow=1, highNarrowQuery=0.5, highWide=1, highWideQuery=0.25, lowNarrow=1, lowNarrowQuery=0.33333333333333337, lowWide=1, lowWideQuery=0.0625 Labels: - alertname = [SSC] Compositions CSA latency - Error Budget Burn Rate is Very High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - cluster = rundev-osall-us-east-2-04 - du = cloud-compositions - environment = osall - grafana_folder = [Platform Engineering] SSC - grafana_slo_severity = warning - grafana_slo_uuid = nyuv5xfxoikxajk0zz418 - notificationtool = pagerduty - service = cloud-compositions-provisioner - service_name = cloud-compositions - severity = warning - team = SSC - team_name = ssc Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?orgId=1&var-environment=osall&var-cluster=rundev-osall-us-east-2-04 - name = SLO Burn Rate Very High - runbook_url = - slo_name = [SSC] Compositions CSA latency - summary = Compositions CSA latency SLO Burn Rate Very High Source: https://outsystems.grafana.net/alerting/grafana/efa56x9bn7pj4f/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Defa56x9bn7pj4f&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=cluster%3Drundev-osall-us-east-2-04&matcher=du%3Dcloud-compositions&matcher=environment%3Dosall&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Dnyuv5xfxoikxajk0zz418&matcher=notificationtool%3Dpagerduty&matcher=service%3Dcloud-compositions-provisioner&matcher=service_name%3Dcloud-compositions&matcher=severity%3Dwarning&matcher=team%3DSSC&matcher=team_name%3Dssc&orgId=1 Dashboard: https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774291130000&orgId=1&to=1774294797321 Panel: https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774291130000&orgId=1&to=1774294797321&viewPanel=1 |
| 03-23 | SEV4 | Manually Created | in_triage | 0.0 | — | — | — | **Firing** Value: Aux_SuccessRate=53.333333333333336, Aux_TotalErrors=7, Aux_TotalRequests=15, IsFiring=1, SuccessRate=53.333333333333336, TotalErrors=7, TotalRequests=15 Labels: - alertname = GA - AI Models - Critical severity errors (Ring +3 EU) - alertingtool = pagerduty - grafana_folder = Consoles - sendResolve = false - severity = warning - team = lowcode ai Annotations: - AlertTitle = GA - AI Models - Critical severity errors (Ring +3 EU) - Dashboard = https://outsystems.grafana.net/d/XQQcRa-Bk/ai-models-console-errors?orgId=1&from=now-5m&to=now - SuccessRate = 53.333333333333336% - TotalErrors = 7 - TotalRequests = 15 - description = [Critical] Success rate less than 99.9 percent for the last 10 minutes (GA - Ring +3 EU) - runbook = https://outsystemsrd.atlassian.net/wiki/spaces/RKB/pages/4990664757/Runbook+AI+models+console+-+Critical+severity+errors - runbook_url = https://outsystemsrd.atlassian.net/wiki/spaces/RKB/pages/4990664757/Runbook+AI+models+console+-+Critical+severity+errors - summary = [Critical] Success rate less than 99.9 percent for the last 10 minutes (GA - Ring +3 EU) Source: https://outsystems.grafana.net/alerting/grafana/feja1ab8bu1vke/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dfeja1ab8bu1vke&matcher=alertingtool%3Dpagerduty&matcher=sendResolve%3Dfalse&matcher=severity%3Dwarning&matcher=team%3Dlowcode+ai&orgId=1 | |||
| 03-23 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: alert=1, highNarrow=0, highNarrowQuery=1, highWide=0, highWideQuery=1, lowNarrow=1, lowNarrowQuery=1, lowWide=1, lowWideQuery=1 Labels: - alertname = KEDA processing Latency - Error Budget Burn Rate is High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - alerttype = ice-keda-SLO-latency - cluster = runp-ga-eu-ce-1-02 - du = catalog-stack - grafana_folder = ICE - grafana_slo_severity = warning - grafana_slo_uuid = zro0c5ht0hh2jbc0y6yg0 - notificationtool = pagerduty - service = keda - service_name = keda - severity = warning - team = ICE - team_name = ICE Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-zro0c5ht0hh2jbc0y6yg0?orgId=1&var-cluster=runp-ga-eu-ce-1-02 - name = SLO Burn Rate High - runbook_url = https://outsystemsrd.atlassian.net/wiki/spaces/RDCCPC/pages/5564662104/ICE+KEDA+Latency+runbook - slo_name = KEDA processing Latency - summary = KEDA scaling latency SLO Burn Rate High Source: https://outsystems.grafana.net/alerting/grafana/cf7ih3o8xcwe9c/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcf7ih3o8xcwe9c&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3Dice-keda-SLO-latency&matcher=cluster%3Drunp-ga-eu-ce-1-02&matcher=du%3Dcatalog-stack&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Dzro0c5ht0hh2jbc0y6yg0&matcher=notificationtool%3Dpagerduty&matcher=service%3Dkeda&matcher=service_name%3Dkeda&matcher=severity%3Dwarning&matcher=team%3DICE&matcher=team_name%3DICE&orgId=1 Dashboard: https://outsystems.grafana.net/d/grafana_slo_app-zro0c5ht0hh2jbc0y6yg0?from=1774261580000&orgId=1&to=1774291061069 Panel: https://outsystems.grafana.net/d/grafana_slo_app-zro0c5ht0hh2jbc0y6yg0?from=1774261580000&orgId=1&to=1774291061069&viewPanel=1 |
| 03-23 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: alert=1, highNarrow=1, highNarrowQuery=1, highWide=1, highWideQuery=1, lowNarrow=1, lowNarrowQuery=1, lowWide=1, lowWideQuery=1 Labels: - alertname = [SSC] Runtime Operator Availability - Error Budget Burn Rate is Very High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - alerttype = ssc-slos - cluster = rundev-osall-eu-west-1-05 - du = runtime-operator - environment = osall - grafana_folder = Runtime Operator - grafana_slo_severity = warning - grafana_slo_uuid = i9w7arlgsh3h8uqdzve1u - notificationtool = pagerduty - service = runtime-operator - service_name = runtime-operator - severity = warning - team = ssc - team_name = ssc Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-i9w7arlgsh3h8uqdzve1u?orgId=1&var-environment=osall&var-cluster=rundev-osall-eu-west-1-05 - name = SLO Burn Rate Very High - Runtime Operator Availability - runbook_url = - slo_name = [SSC] Runtime Operator Availability - summary = Runtime Operator Availability SLO Burn Rate Very High Source: https://outsystems.grafana.net/alerting/grafana/efad206azl0cgb/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Defad206azl0cgb&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3Dssc-slos&matcher=cluster%3Drundev-osall-eu-west-1-05&matcher=du%3Druntime-operator&matcher=environment%3Dosall&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Di9w7arlgsh3h8uqdzve1u&matcher=notificationtool%3Dpagerduty&matcher=service%3Druntime-operator&matcher=service_name%3Druntime-operator&matcher=severity%3Dwarning&matcher=team%3Dssc&matcher=team_name%3Dssc&orgId=1 Dashboard: https://outsystems.grafana.net/d/grafana_slo_app-i9w7arlgsh3h8uqdzve1u?from=1774286430000&orgId=1&to=1774290096344 Panel: https://outsystems.grafana.net/d/grafana_slo_app-i9w7arlgsh3h8uqdzve1u?from=1774286430000&orgId=1&to=1774290096344&viewPanel=1 |
| 03-23 | SEV3 | Manually Created | in_triage | 0.0 | — | — | — | h4. ISSUE DESCRIPTION AND HOW TO REPRODUCE - A paragraph detailing the issue or question. - When the customer filters the *dev* traces by a *specific* app, they receive a "This app doesn't exist" error message. - [Attachment - image.png|https://supportoutsystems.zendesk.com/attachments/token/I5jaihM1dcL9eHwhjP9fPRbhx/?name=image.png] - [Attachment - image.png|https://supportoutsystems.zendesk.com/attachments/token/dT86Tirec6K2TEfFqcMRDrWln/?name=image.png] - If available, the steps that must be taken to reproduce the issue under discussion. - Go to traces - Dev traces - Try to filter the traces by an app h4. IMPACT ON THE CUSTOMER Brief description of the impact on the customer/development team/other, including: - Stage where the problem is happening (Development / QA / Production); - ODC Portal - Frequency of the problem; - Ongoing - Business impact and/or development impact; - In a business context, this could slow the development of all the apps in the tenant, since the developers will not be able to easily find the traces from the app that they are developing h4. TROUBLESHOOTING STEPS & WORKAROUND - Replicating the issue, we found that this is happening with all the apps when filtering by asset. - [Attachment - image.png|https://supportoutsystems.zendesk.com/attachments/token/I5jaihM1dcL9eHwhjP9fPRbhx/?name=image.png] - [Attachment - image.png|https://supportoutsystems.zendesk.com/attachments/token/dT86Tirec6K2TEfFqcMRDrWln/?name=image.png] - This error seems to be happening only with the dev traces, since it does not happen with the prod traces. - [Attachment - image.png|https://supportoutsystems.zendesk.com/attachments/token/5OxT3Z4r8rpk5CP0dxBlSnwai/?name=image.png] - This issue is similar to the one reported on RDINC-75447; the difference is that now the traces are affected and not the logs. At this moment, the issue is limited to the EA ring, since we are unable to replicate it in our sandbox, which is in the GA ring. If this reaches our GA ring, it could become a wide incident. - [Attachment - image.png|https://supportoutsystems.zendesk.com/attachments/token/OEb5YgF1u4yDmPaOXKmlCAumI/?name=image.png] h4. TECH DATA OR ANY OTHER RELEVANT INFO - *Tenant ID* (mandatory): 146340fb-b191-4db0-ae94-64fd3a04ed0e - *Stage ID* (mandatory): ODC Portal Live 02 / cf242ec4-02ac-4a33-b54c-cb5005cad42c - *Application Key* (mandatory if appl.): N/A - *Timeline with start and end date/hour* (mandatory): Reported 3/23/2026 and still ongoing - *OutSystems revisions of the components involved (this includes for example revision of ODC Studio or Forge Supported Plugins)* (mandatory if appl.): N/A - *Diagnostics report* (mandatory for ODC Studio-related issues): N/A - *Grafana dashboards* (adjusted to timeline/tenant/environment/service): N/A {{[! do not remove this line, this will be used to the trigger Technical Support::Send to R&D - ODC #trigger_send_to_r&d_odc !]}} ~* Please see Zendesk Support tab for further comments and attachments.~ |
| 03-23 | SEV4 | Manually Created | in_triage | 0.0 | — | — | — | **Firing** Value: Exception=2, Trigger=2 Labels: - alertname = TS - Built-in Domain Error - SendResolve = false - alertingtool = pagerduty - component = plat - detected_level = error - exporter = OTLP - grafana_folder = PaaS - instance = 7640617d-ab3a-4906-9bde-e52cc5390cf4 - job = OutSystems.Tenant.Service/Tenant.Service - k8s_namespace_name = platform-services - level = ERROR - outsystems_otel_access_type = 2 - outsystems_otel_access_visibility = 1 - ring = dev - service_name = Tenant.Service - severity = warning - stamp = plat-dev-us-east-1-01 - team = ces Annotations: - description = This alert triggers when Tenant Storage (TS) fails to complete a Built-In Domain operation. - summary = Unexpected failure when configuring a Built-in Domain. Stamp: plat-dev-us-east-1-01 Source: https://outsystems.grafana.net/alerting/grafana/cfbbu80t78t1cc/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcfbbu80t78t1cc&matcher=SendResolve%3Dfalse&matcher=alertingtool%3Dpagerduty&matcher=component%3Dplat&matcher=detected_level%3Derror&matcher=exporter%3DOTLP&matcher=instance%3D7640617d-ab3a-4906-9bde-e52cc5390cf4&matcher=job%3DOutSystems.Tenant.Service%2FTenant.Service&matcher=k8s_namespace_name%3Dplatform-services&matcher=level%3DERROR&matcher=outsystems_otel_access_type%3D2&matcher=outsystems_otel_access_visibility%3D1&matcher=ring%3Ddev&matcher=service_name%3DTenant.Service&matcher=severity%3Dwarning&matcher=stamp%3Dplat-dev-us-east-1-01&matcher=team%3Dces&orgId=1 | |||
| 03-23 | SEV4 | Manually Created | in_triage | 0.0 | — | — | — | h4. R&D ESCALATION FORM Section comments can be removed for easier R&D interpretation. h4. ISSUE DESCRIPTION AND HOW TO REPRODUCE - The customer is requesting us to delete "CoolAIRegistryPolicyAnalyser" from the Forge h4. IMPACT ON THE CUSTOMER - Low - This is a simple request. h4. TROUBLESHOOTING STEPS & WORKAROUND - I tried to see if there was a button or anything for the customer to remove this on their own, but I couldn't find anything h4. TECH DATA OR ANY OTHER RELEVANT INFO - *Tenant ID* (mandatory): - *a066d0ff-d380-4c29-9f8a-957e7405ac40* - *Stage ID* (mandatory): - N/A - *Application Key* (mandatory if appl.): - b5bd165e-3041-4435-9b0a-7b443bd56590 - *Timeline with start and end date/hour* (mandatory): - N/A - *OutSystems revisions of the components involved (this includes for example revision of ODC Studio or Forge Supported Plugins)* (mandatory if appl.): - Asset ID: b5bd165e-3041-4435-9b0a-7b443bd56590 - https://www.outsystems.com/forge/component-overview/23785/coolairegistrypolicyanalyser-odc - *Diagnostics report* (mandatory for ODC Studio-related issues): - *Grafana dashboards* (adjusted to timeline/tenant/environment/service): {{[! do not remove this line, this will be used to the trigger Technical Support::Send to R&D - ODC #trigger_send_to_r&d_odc !]}} ~* Please see Zendesk Support tab for further comments and attachments.~ |
| 03-23 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=4.5, C=1 Labels: - alertname = NATS Container Restarts - alertingtool = pagerduty - alerttype = ice-nats-restarts - cluster = datap-dev-us-east-1-01 - du = catalog-stack-operator - dynTitle = There is an increase of NATS container restarts in the last 10min on cluster datap-dev-us-east-1-01, namespace nats-dev - grafana_folder = Infrastructure & Cloud Engineering (ICE) - namespace = nats-dev - pod = nats-2 - service = nats - severity = warning - team = ICE Annotations: - description = There is an increase of NATS container restarts in the last 10min on cluster datap-dev-us-east-1-01, namespace nats-dev - runbook_url = https://outsystemsrd.atlassian.net/wiki/spaces/RDCCPC/pages/3764290587/NATS+Runbooks - summary = There is an increase of NATS container restarts in the last 10min on cluster datap-dev-us-east-1-01 Source: https://outsystems.grafana.net/alerting/grafana/de8l6dmc8sr28d/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dde8l6dmc8sr28d&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3Dice-nats-restarts&matcher=cluster%3Ddatap-dev-us-east-1-01&matcher=du%3Dcatalog-stack-operator&matcher=dynTitle%3DThere+is+an+increase+of+NATS+container+restarts+in+the+last+10min+on+cluster+datap-dev-us-east-1-01%2C+namespace+nats-dev&matcher=namespace%3Dnats-dev&matcher=pod%3Dnats-2&matcher=service%3Dnats&matcher=severity%3Dwarning&matcher=team%3DICE&orgId=1 Dashboard: https://outsystems.grafana.net/d/cdmhfj8mu1mv4d?from=1774281700000&orgId=1&to=1774285365974 Panel: https://outsystems.grafana.net/d/cdmhfj8mu1mv4d?from=1774281700000&orgId=1&to=1774285365974&viewPanel=40 | |||
| 03-23 | SEV4 | Manually Created | in_triage | 0.0 | — | — | — | **Firing** Value: A=0, C=0, D=1 Labels: - alertname = NATS Statefulset Ready Rule - alertingtool = pagerduty - alerttype = ice-nats-unavailable - cluster = datap-dev-us-east-1-01 - du = catalog-stack-operator - dynTitle = NATS replicas (nats nats-dev) in ring DEV stamp datap-dev-us-east-1-01 has no running replicas over the last 15m. - environment = dev - grafana_folder = Infrastructure & Cloud Engineering (ICE) - namespace = nats-dev - ring = dev - service = nats - severity = warning - statefulset = nats - team = ICE Annotations: - Statefulset status dashboard = https://outsystems.grafana.net/d/rodgz6t/deployment-statefulset-helmrelease-status?orgId=1&from=now-24h&to=now&var-ring=dev&var-stamp=datap-dev-us-east-1-01&var-statefulset=nats - description = NATS replicas (nats nats-dev) in ring DEV stamp datap-dev-us-east-1-01 has no running replicas over the last 15m. - runbook_url = https://outsystemsrd.atlassian.net/wiki/spaces/RDCCPC/pages/3764421197/ICE+NATS+JetStream+is+not+current+with+the+meta+leader - summary = NATS replicas in ring DEV stamp datap-dev-us-east-1-01 has no running replicas over the last 15m. Source: https://outsystems.grafana.net/alerting/grafana/cdo2qxbxpfwn4d/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcdo2qxbxpfwn4d&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3Dice-nats-unavailable&matcher=cluster%3Ddatap-dev-us-east-1-01&matcher=du%3Dcatalog-stack-operator&matcher=dynTitle%3DNATS+replicas+%28nats+nats-dev%29+in+ring+DEV+stamp+datap-dev-us-east-1-01+has+no+running+replicas+over+the+last+15m.&matcher=environment%3Ddev&matcher=namespace%3Dnats-dev&matcher=ring%3Ddev&matcher=service%3Dnats&matcher=severity%3Dwarning&matcher=statefulset%3Dnats&matcher=team%3DICE&orgId=1 Dashboard: https://outsystems.grafana.net/d/cdmhfj8mu1mv4d?from=1774280020000&orgId=1&to=1774283683118 Panel: https://outsystems.grafana.net/d/cdmhfj8mu1mv4d?from=1774280020000&orgId=1&to=1774283683118&viewPanel=13 |
| 03-23 | SEV1 | Customer Escalated | started | 0.0 | live | live | — | h4. R&D ESCALATION FORM h4. ISSUE DESCRIPTION AND HOW TO REPRODUCE - External library returning 403 h4. IMPACT ON THE CUSTOMER - Unable to log in to the application due to an external library error, impacting volunteer registration in the Prod app h4. TROUBLESHOOTING STEPS & WORKAROUND - Customer error - [Attachment - https://www.outsystems.com/SupportPortal/DownloadAmazon.aspx?FileName=1774277908000__1774277897163.png&TicketGUID=3bd8a422-3e73-48c9-a3d6-af13fbf14fe0|https://www.outsystems.com/SupportPortal/DownloadAmazon.aspx?FileName=1774277908000__1774277897163.png&TicketGUID=3bd8a422-3e73-48c9-a3d6-af13fbf14fe0] - Grafana error - 2026-03-23 12:47:52.978 error BackendRuntime | OS-BERT-SLIB-00000 Portal Voluntário [Erro] REST (Expose) Something went wrong on our side. - *HttpRequestException Response status code does not indicate success: 403 (Forbidden). * at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode() at OutSystems.Application.ExternalLibraries.Services.ExternalLibraryService.CallExecutionEndpointAsync[TRequest,TResponse|String actionKey, TRequest actionInputs, Guid libraryKey, Int32 revision, CancellationToken cancellationToken] at OutSystems.NssCryptoAPI.CssCryptoAPI.MssHashPassword(String inParamPassword, String inParamAlgorithm, String inParamStrength, CancellationToken cancellationToken) at ssPortalVoluntario.RssExternalLibraryCryptoAPI.MssHashPassword(IRequestContext requestContext, String inParamPassword, String inParamAlgorithm, String inParamStrength, CancellationToken cancellationToken) - attributes_exception_message - Something went wrong on our side. - attributes_exception_stacktrace - at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode() at OutSystems.Application.ExternalLibraries.Services.ExternalLibraryService.CallExecutionEndpointAsync[TRequest,TResponse|String actionKey, TRequest actionInputs, Guid libraryKey, Int32 revision, CancellationToken cancellationToken] at OutSystems.NssCryptoAPI.CssCryptoAPI.MssHashPassword(String inParamPassword, String inParamAlgorithm, String inParamStrength, CancellationToken cancellationToken) at ssPortalVoluntario.RssExternalLibraryCryptoAPI.MssHashPassword(IRequestContext requestContext, String inParamPassword, String inParamAlgorithm, String inParamStrength, CancellationToken cancellationToken) On at ssPortalVoluntario.RssExternalLibraryCryptoAPI.MssHashPassword(IRequestContext requestContext, String inParamPassword, String inParamAlgorithm, String inParamStrength, CancellationToken cancellationToken) at ssPortalVoluntario.Actions.ActionHashPassword(IRequestContext requestContext, String inParamPassword, String inParamAlgorithm, String inParamStrength, CancellationToken cancellationToken) at ssPortalVoluntario.Actions.ActionCifrarPassword(IRequestContext requestContext, String inParamPassword, CancellationToken cancellationToken) at ssPortalVoluntario.CsRESTExpose.CsRestAPI.CsRestAPIControllerFlows.FlowRestAPIActionRegistoVoluntario(IRequestContext requestContext, RC_12f2641a9621f39fdbfb9ed8915c110a inParamIn, CancellationToken cancellationToken) at ssPortalVoluntario.CsRESTExpose.CsRestAPI.CsRestAPIController.FlowRestAPIActionRegistoVoluntario(JSONRC_12f2641a9621f39fdbfb9ed8915c110a auxinParamIn, CancellationToken cancellationToken) - error started in Prod today at 08:44 UTC (Grafana) - [Attachment - image.png|https://supportoutsystems.zendesk.com/attachments/token/5nbu0ZxX71REpQmkEB2P4Tyvi/?name=image.png] - checking the external library operator in the DU Global view, we can see that it started to happen 20 minutes after the latest update in EA started to "bake" - [Attachment - image.png|https://supportoutsystems.zendesk.com/attachments/token/Hfj1nxPp7Dx6rq3OWGKSzZ7S2/?name=image.png] - this is an issue in EA where the external library is returning 403; in the past we had some wide incidents with OS-BERT-SLIB-00000 + 403 errors - #3180625 https://outsystemsrd.atlassian.net/browse/RDINC-43242 - #3167686 https://outsystemsrd.atlassian.net/browse/RDINC-41288 h4. TECH DATA OR ANY OTHER RELEVANT INFO - *Tenant ID* (mandatory): e9c26c42-7e60-4e0b-8dc1-1656360ae3be - *Stage ID* (mandatory): 54ad3a60-e001-45eb-a559-3703058333d9 - *Application Key* (mandatory if appl.): 4c030bc7-e2dd-4b94-94ab-a16416ea46cf - *Timeline with start and end date/hour* (mandatory): *08:44 UTC, March 23* - *OutSystems revisions of the components involved (this includes for example revision of ODC Studio or Forge Supported Plugins)* (mandatory if appl.): - *Diagnostics report* (mandatory for ODC Studio-related issues): - *Grafana dashboards* (adjusted to timeline/tenant/environment/service): {{[! do not remove this line, this will be used to the trigger Technical Support::Send to R&D - ODC #trigger_send_to_r&d_odc !]}} ~* Please see Zendesk Support tab for further comments and attachments.~ |
| 03-23 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: alert=1, highNarrow=1, highNarrowQuery=0.12578483550615216, highWide=1, highWideQuery=0.04762513301207982, lowNarrow=1, lowNarrowQuery=0.0729292785721618, lowWide=1, lowWideQuery=0.03226208858830182 Labels: - alertname = [SSC] Compositions Secret latency - Error Budget Burn Rate is High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - cluster = rundev-osall-eu-west-1-02 - du = cloud-compositions - environment = osall - grafana_folder = [Platform Engineering] SSC - grafana_slo_severity = warning - grafana_slo_uuid = y3o1h0dz6mdr87sap4sbt - notificationtool = pagerduty - service = cloud-compositions-provisioner - service_name = cloud-compositions - severity = warning - team = SSC - team_name = ssc Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-y3o1h0dz6mdr87sap4sbt?orgId=1&var-environment=osall&var-cluster=rundev-osall-eu-west-1-02 - name = SLO Burn Rate High - runbook_url = - slo_name = [SSC] Compositions Secret latency - summary = Compositions Secret latency SLO Burn Rate High Source: https://outsystems.grafana.net/alerting/grafana/bfa5w1lvzhnghd/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dbfa5w1lvzhnghd&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=cluster%3Drundev-osall-eu-west-1-02&matcher=du%3Dcloud-compositions&matcher=environment%3Dosall&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Dy3o1h0dz6mdr87sap4sbt&matcher=notificationtool%3Dpagerduty&matcher=service%3Dcloud-compositions-provisioner&matcher=service_name%3Dcloud-compositions&matcher=severity%3Dwarning&matcher=team%3DSSC&matcher=team_name%3Dssc&orgId=1 Dashboard: https://outsystems.grafana.net/d/grafana_slo_app-y3o1h0dz6mdr87sap4sbt?from=1774277960000&orgId=1&to=1774281623146 Panel: https://outsystems.grafana.net/d/grafana_slo_app-y3o1h0dz6mdr87sap4sbt?from=1774277960000&orgId=1&to=1774281623146&viewPanel=1 |
| 03-23 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: alert=1, highNarrow=1, highNarrowQuery=1, highWide=1, highWideQuery=0.14779161789552264, lowNarrow=1, lowNarrowQuery=0.14337560765125146, lowWide=1, lowWideQuery=0.08128464820358716 Labels: - alertname = [SSC] Compositions EncryptionKey latency - Error Budget Burn Rate is Very High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - cluster = rundev-osall-eu-west-1-02 - du = cloud-compositions - environment = osall - grafana_folder = [Platform Engineering] SSC - grafana_slo_severity = warning - grafana_slo_uuid = r4aj5so8y6d1o4uz36xow - notificationtool = pagerduty - service = cloud-compositions-provisioner - service_name = cloud-compositions - severity = warning - team = SSC - team_name = ssc Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-r4aj5so8y6d1o4uz36xow?orgId=1&var-environment=osall&var-cluster=rundev-osall-eu-west-1-02 - name = SLO Burn Rate Very High - runbook_url = - slo_name = [SSC] Compositions EncryptionKey latency - summary = Compositions EncryptionKey latency SLO Burn Rate Very High Source: https://outsystems.grafana.net/alerting/grafana/bfa59hz0ye1a8d/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dbfa59hz0ye1a8d&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=cluster%3Drundev-osall-eu-west-1-02&matcher=du%3Dcloud-compositions&matcher=environment%3Dosall&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Dr4aj5so8y6d1o4uz36xow&matcher=notificationtool%3Dpagerduty&matcher=service%3Dcloud-compositions-provisioner&matcher=service_name%3Dcloud-compositions&matcher=severity%3Dwarning&matcher=team%3DSSC&matcher=team_name%3Dssc&orgId=1 Dashboard: 
https://outsystems.grafana.net/d/grafana_slo_app-r4aj5so8y6d1o4uz36xow?from=1774277920000&orgId=1&to=1774281582451 Panel: https://outsystems.grafana.net/d/grafana_slo_app-r4aj5so8y6d1o4uz36xow?from=1774277920000&orgId=1&to=1774281582451&viewPanel=1 | |||
| 03-23 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: alert=1, highNarrow=1, highNarrowQuery=1, highWide=0, highWideQuery=0.12578483550615216, lowNarrow=1, lowNarrowQuery=0.1256029218574256, lowWide=1, lowWideQuery=0.0729292785721618 Labels: - alertname = [SSC] Compositions Secret latency - Error Budget Burn Rate is Very High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - cluster = rundev-osall-eu-west-1-02 - du = cloud-compositions - environment = osall - grafana_folder = [Platform Engineering] SSC - grafana_slo_severity = warning - grafana_slo_uuid = y3o1h0dz6mdr87sap4sbt - notificationtool = pagerduty - service = cloud-compositions-provisioner - service_name = cloud-compositions - severity = warning - team = SSC - team_name = ssc Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-y3o1h0dz6mdr87sap4sbt?orgId=1&var-environment=osall&var-cluster=rundev-osall-eu-west-1-02 - name = SLO Burn Rate Very High - runbook_url = - slo_name = [SSC] Compositions Secret latency - summary = Compositions Secret latency SLO Burn Rate Very High Source: https://outsystems.grafana.net/alerting/grafana/dfa5w1lvzhnggb/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Ddfa5w1lvzhnggb&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=cluster%3Drundev-osall-eu-west-1-02&matcher=du%3Dcloud-compositions&matcher=environment%3Dosall&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Dy3o1h0dz6mdr87sap4sbt&matcher=notificationtool%3Dpagerduty&matcher=service%3Dcloud-compositions-provisioner&matcher=service_name%3Dcloud-compositions&matcher=severity%3Dwarning&matcher=team%3DSSC&matcher=team_name%3Dssc&orgId=1 Dashboard: 
https://outsystems.grafana.net/d/grafana_slo_app-y3o1h0dz6mdr87sap4sbt?from=1774277900000&orgId=1&to=1774281566694 Panel: https://outsystems.grafana.net/d/grafana_slo_app-y3o1h0dz6mdr87sap4sbt?from=1774277900000&orgId=1&to=1774281566694&viewPanel=1 | |||
| 03-23 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=0.7547399732801648, E=1 Labels: - alertname = Upbound provider memory reaching configured limits - alertingtool = pagerduty - alerttype = ice-xp-aws-provider-memory - cluster = id-osall-ap-se-1-01 - container = upbound-aws-cognitoidp - dynTitle = upbound-aws-cognitoidp memory usage has exceeded 75% of the configured limits over the last hour in id-osall-ap-se-1-01 - environment = osall - grafana_folder = [Platform Engineering] SSC - notificationtool = pagerduty - ring = osall - service = cloud-compositions-provisioner - severity = warning - team = ssc Annotations: - description = upbound-aws-cognitoidp memory usage has exceeded 75% of the configured limits over the last hour in id-osall-ap-se-1-01 - summary = upbound-aws-cognitoidp memory usage has exceeded 75% of the configured limits over the last hour in id-osall-ap-se-1-01 Source: https://outsystems.grafana.net/alerting/grafana/beqx7x908kbnkc/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dbeqx7x908kbnkc&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3Dice-xp-aws-provider-memory&matcher=cluster%3Did-osall-ap-se-1-01&matcher=container%3Dupbound-aws-cognitoidp&matcher=dynTitle%3Dupbound-aws-cognitoidp+memory+usage+has+exceeded+75%25+of+the+configured+limits+over+the+last+hour+in++id-osall-ap-se-1-01&matcher=environment%3Dosall&matcher=notificationtool%3Dpagerduty&matcher=ring%3Dosall&matcher=service%3Dcloud-compositions-provisioner&matcher=severity%3Dwarning&matcher=team%3Dssc&orgId=1 Dashboard: https://outsystems.grafana.net/d/6581e46e4e5c7ba40a07646395ef7b2a?from=1774275810000&orgId=1&to=1774279474739 Panel: https://outsystems.grafana.net/d/6581e46e4e5c7ba40a07646395ef7b2a?from=1774275810000&orgId=1&to=1774279474739&viewPanel=4 | |||
| 03-23 | SEV3 | Manually Created | in_triage | 0.0 | — | — | — | h4. ISSUE DESCRIPTION AND HOW TO REPRODUCE - *NOTE.* The incident is mitigated, and the customer asked for a detailed RCA. - One exposed API from *AX_SignUpVerification_Service* started returning *503 – No Healthy Upstream*. The issue began around *6:30 AM UTC*. - Additionally, inconsistencies were observed in the ODC Portal: - The application was not visible in the dropdown. - The application appeared as deployed on *Jan 01, 1990*. - From *06:34 UTC*, there were *no live pods*, resulting in a full application outage. - From Grafana analysis: - The pod entered a *Pending state at 01:49 UTC*. - It only returned to *a Running state at 10:22 UTC*, after mitigation. - Application logs exist only until *06:31:05 UTC*. Pod-related errors identified: {{Failed to pull image "…": failed to resolve reference … unexpected status from HEAD request … 403 Forbidden}} h4. IMPACT ON THE CUSTOMER - *Stage:* Production - *Frequency:* First observed occurrence in this context (though similar situations were reportedly seen before) - *Impact:* - Full application downtime (no live pods) - API returning *503 – No Healthy Upstream* - Prolonged impact due to a lack of alerting - Manual intervention was required for recovery h4. TROUBLESHOOTING STEPS & WORKAROUND - The pod entered the *Pending state* and was unable to recover automatically. - Error logs (seen in Grafana) indicate a *failure to pull the container image from ECR (403 Forbidden)*. - No clear root cause has been identified. The workaround consisted of asking the customer to redeploy the app from Development to Production. - *We are seeking R&D's help to answer the customer's questions in an RCA context:* ## 1. What was the reason for the failure? ## 2. Why did we have to redeploy the app to Production? ## 3. What measures is OutSystems taking so that this will not occur again (since we have seen this in the past too)? 
- Regarding question 1, we seek the details of what caused the issue. - Regarding question 2, the customer wants to understand whether there was any way to mitigate other than redeploying. - Regarding question 3, we should evaluate whether platform improvements would help minimize the impact of these events, for example: - Alerts - Self-service mitigation in the ODC Portal by directly deploying the app in production - Consider automatic redeployment of the app/container in production h4. TECH DATA OR ANY OTHER RELEVANT INFO - *Tenant ID* (mandatory): 12ad9b9e-6b02-4a7e-8a5b-b8a62f9f4130 - *Stage ID* (mandatory): Production - *Application Key* (mandatory if appl.): 1e40c060-2202-4c66-8d46-4fcec5228974 - *Timeline with start and end date/hour* (mandatory): 06:31:05.137 / 10:19:18.207 - *OutSystems revisions of the components involved (this includes for example revision of ODC Studio or Forge Supported Plugins)* (mandatory if appl.): - *Diagnostics report* (mandatory for ODC Studio-related issues): - *Grafana dashboards* (adjusted to timeline/tenant/environment/service): [General] Troubleshooting grafanacloud-outsystems-logs query {{[! do not remove this line, this will be used to the trigger Technical Support::Send to R&D - ODC #trigger_send_to_r&d_odc !]}} ~* Please see Zendesk Support tab for further comments and attachments.~ | |||
| 03-23 | SEV2 | Customer Escalated | started | 0.0 | live | live | — | h4. ISSUE DESCRIPTION AND HOW TO REPRODUCE - The customer is entirely blocked from publishing the "Procurement Demo" application. During the 1-Click Publish, the deployment crashes with an {{OS-DPL-50204}} / {{OS-RDBE-GEN-50001}} error. - Crucially, the application has disappeared from the Development stage view in the ODC Portal, even though it is still present in the Test stage. The customer has a tight deadline to present this demo app to their client, and development is completely halted. - *Steps to reproduce:* - Open the "Procurement Demo" app in ODC Studio (connected to the Dev environment). - Attempt a 1-Click Publish. - The publish fails during the database migration phase with a SQL foreign key constraint error. h4. IMPACT ON THE CUSTOMER Brief description of the impact on the customer/development team/other, including: - Stage where the problem is happening (Development / QA / Production); - Frequency of the problem; - Business impact or/and development impact; h4. TROUBLESHOOTING STEPS & WORKAROUND - The app is completely missing from the Development environment in the ODC Portal. I checked the customer's Audit Logs, and there is no {{DeleteAsset}} operation recorded. - [Attachment - image.png|https://supportoutsystems.zendesk.com/attachments/token/339N45trV3Soqetg21rlE5Baf/?name=image.png] - [Attachment - image.png|https://supportoutsystems.zendesk.com/attachments/token/Bh9HDuVU3BVcQJlcAtra4khs2/?name=image.png] - To verify the expected behavior, I tested deleting an app from Dev in my own sandbox, which properly generated a {{DeleteAsset}} log. This strongly implies the app disappeared from the Dev UI due to a system/metadata bug, not a user deletion. (Maestro logs show the last successful publish in Dev was on the 19th.) 
- [Attachment - image.png|https://supportoutsystems.zendesk.com/attachments/token/JZERLDZmVBGxuXbMJNLvFPzaa/?name=image.png] - [Attachment - image.png|https://supportoutsystems.zendesk.com/attachments/token/ZVsctHf1wSplAGIGAlLlcvxPb/?name=image.png] - With the customer's permission, we attempted to bypass the local Studio corruption by taking the revision currently in the Test stage and publishing it back to Dev. This also failed with the exact same database constraint error. There is currently no workaround to bypass the DB lock or restore the application's visibility in the Dev portal. h4. TECH DATA OR ANY OTHER RELEVANT INFO - Ring: ga - Tenant Id: aab79c91-f7f7-46ee-a21a-e33a62474a8b - Tenant Prefix: procurement - Region: il-central-1 - AMW.FQP.DAK.PXG.AOZ.TLQ.04O.B0L - *Diagnostics report* (mandatory for ODC Studio-related issues): - *Grafana dashboards* (adjusted to timeline/tenant/environment/service): - https://outsystems.grafana.net/d/o2_h2-Bnz/publish-service-1cp-by-tenant?orgId=1&var-ring=ga&var-tenant=aab79c91-f7f7-46ee-a21a-e33a62474a8b&var-interval=$__auto&from=2026-03-19T00:00:00.000Z&to=2026-03-23T23:59:59.000Z&timezone=browser&var-region=il-central-1&var-stamp=plat-ga-il-ce-1-01&var-exclude_tenant=$__all&viewPanel=panel-69 - https://outsystems.grafana.net/d/A1YJOsa7z/1cp-logs?orgId=1&var-ring=ga&var-level=Error&var-level=Warning&to=now&var-service=$__all&var-traceId=47e0fc9a4af996f774f6c84895ef7384&from=2026-03-23T08:20:03.501Z&timezone=browser&var-searchText= {{[! do not remove this line, this will be used to the trigger Technical Support::Send to R&D - ODC #trigger_send_to_r&d_odc !]}} h3. 
Attachments [diagnostic report.txt|https://supportoutsystems.zendesk.com/attachments/token/2OOD8XsEoFqy5uqstwXTh24dP/?name=diagnostic+report.txt] [DiagnosticReportProcurementDemo.txt|https://supportoutsystems.zendesk.com/attachments/token/ycXLRxaJsvjx79wUhEUWhLA8d/?name=DiagnosticReportProcurementDemo.txt] ~* Please see Zendesk Support tab for further comments and attachments.~ | |||
| 03-23 | SEV4 | Manually Created | started | 0.0 | live | live | — | **Firing** Value: [no value] Labels: - alertname = [SSC] Compositions CSA latency - Error Budget Burn Rate is High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - cluster = rundev-osall-ap-se-1-01 - du = cloud-compositions - environment = osall - grafana_folder = [Platform Engineering] SSC - grafana_slo_severity = warning - grafana_slo_uuid = nyuv5xfxoikxajk0zz418 - notificationtool = pagerduty - service = cloud-compositions-provisioner - service_name = cloud-compositions - severity = warning - team = SSC - team_name = ssc Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?orgId=1&var-environment=osall&var-cluster=rundev-osall-ap-se-1-01 - name = SLO Burn Rate High - runbook_url = - slo_name = [SSC] Compositions CSA latency - summary = Compositions CSA latency SLO Burn Rate High Source: https://outsystems.grafana.net/alerting/grafana/dfa56x9bn7pj5b/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Ddfa56x9bn7pj5b&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=cluster%3Drundev-osall-ap-se-1-01&matcher=du%3Dcloud-compositions&matcher=environment%3Dosall&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Dnyuv5xfxoikxajk0zz418&matcher=notificationtool%3Dpagerduty&matcher=service%3Dcloud-compositions-provisioner&matcher=service_name%3Dcloud-compositions&matcher=severity%3Dwarning&matcher=team%3DSSC&matcher=team_name%3Dssc&orgId=1 Dashboard: https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774247390000&orgId=1&to=1774269416121 Panel: https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774247390000&orgId=1&to=1774269416121&viewPanel=1 | |||
| 03-23 | SEV2 | Manually Created | started | 0.0 | live | live | — | **Firing** Value: B=0, C=1 Labels: - alertname = Canaries Pager Duty : ga-eu-2 - o11 Logs - CanaryName = ga-eu-2-o11logs-v1 - Series = SuccessPercent - alertingtool = pagerduty - alerttype = Failed canaries - grafana_folder = Data Stamp - Data Platform infra - notificationtool = pagerduty - severity = high - team = dna Annotations: - Canary Name = ga-eu-2-o11logs-v1 - Dashboard = https://outsystems.grafana.net/d/TXZHSY-Vk/data-platform-canaries?orgId=1 - Environment = ga-eu-2 - summary = The SuccessPercent in ga-eu-2-o11logs-v1 has been zero for the last 30 minutes. Current Value: 0 Source: https://outsystems.grafana.net/alerting/grafana/cexpz5h4ptssgf/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcexpz5h4ptssgf&matcher=CanaryName%3Dga-eu-2-o11logs-v1&matcher=Series%3DSuccessPercent&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3DFailed+canaries&matcher=notificationtool%3Dpagerduty&matcher=severity%3Dhigh&matcher=team%3Ddna&orgId=1 Dashboard: https://outsystems.grafana.net/d/TXZHSY-Vk?from=1774263090000&orgId=1&to=1774266754593 Panel: https://outsystems.grafana.net/d/TXZHSY-Vk?from=1774263090000&orgId=1&to=1774266754593&viewPanel=17 | |||
| 03-23 | SEV4 | Manually Created | started | 0.0 | live | live | — | **Firing** Value: alert=1, highNarrow=1, highNarrowQuery=1, highWide=1, highWideQuery=0.5, lowNarrow=1, lowNarrowQuery=0.5, lowWide=1, lowWideQuery=0.5 Labels: - alertname = [SSC] Compositions CSA latency - Error Budget Burn Rate is Very High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - cluster = rundev-osall-us-east-2-03 - du = cloud-compositions - environment = osall - grafana_folder = [Platform Engineering] SSC - grafana_slo_severity = warning - grafana_slo_uuid = nyuv5xfxoikxajk0zz418 - notificationtool = pagerduty - service = cloud-compositions-provisioner - service_name = cloud-compositions - severity = warning - team = SSC - team_name = ssc Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?orgId=1&var-environment=osall&var-cluster=rundev-osall-us-east-2-03 - name = SLO Burn Rate Very High - runbook_url = - slo_name = [SSC] Compositions CSA latency - summary = Compositions CSA latency SLO Burn Rate Very High Source: https://outsystems.grafana.net/alerting/grafana/efa56x9bn7pj4f/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Defa56x9bn7pj4f&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=cluster%3Drundev-osall-us-east-2-03&matcher=du%3Dcloud-compositions&matcher=environment%3Dosall&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Dnyuv5xfxoikxajk0zz418&matcher=notificationtool%3Dpagerduty&matcher=service%3Dcloud-compositions-provisioner&matcher=service_name%3Dcloud-compositions&matcher=severity%3Dwarning&matcher=team%3DSSC&matcher=team_name%3Dssc&orgId=1 Dashboard: https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774260890000&orgId=1&to=1774264557344 Panel: 
https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774260890000&orgId=1&to=1774264557344&viewPanel=1 | |||
| 03-23 | SEV3 | Manually Created | closed | 0.0 | 0.09 | 0.09 | — | **Firing** Value: A=1, ErrorCounts_Last=6 Labels: - alertname = KCM - Create realm failure - Positive RIngs - alertingtool = pagerduty - grafana_folder = Identity Core - ring = [no value] - sendResolve = false - stamp = id-osall-eu-west-1-01 - team = identity-core Annotations: - runbook_url = https://outsystemsrd.atlassian.net/wiki/spaces/RT/pages/2912092242/Summary+of+Identity+Services+and+tools#Troubleshooting - summary = PLEASE CHECK - Failure to create realms in stamp id-osall-eu-west-1-01 Source: https://outsystems.grafana.net/alerting/grafana/bdi7ise9dybk0e/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dbdi7ise9dybk0e&matcher=alertingtool%3Dpagerduty&matcher=ring%3D%5Bno+value%5D&matcher=sendResolve%3Dfalse&matcher=stamp%3Did-osall-eu-west-1-01&matcher=team%3Didentity-core&orgId=1 Dashboard: https://outsystems.grafana.net/d/HYrOJ-07z?from=1774259540000&orgId=1&to=1774263204477 Panel: https://outsystems.grafana.net/d/HYrOJ-07z?from=1774259540000&orgId=1&to=1774263204477&viewPanel=61 | |||
| 03-23 | SEV4 | Manually Created | cancelled | 0.0 | — | — | — | **Firing** Value: A=0, B=0, C=1 Labels: - alertname = EJ - Missing job execution (All stamps) - SendResolve = false - alertingtool = pagerduty - environment = ga - grafana_folder = PaaS - ring = ga - severity = warning - stamp = plat-ga-me-ce-1-01 - team = ces Annotations: - Log = 0 - Severity = low - description = Entitlement job execution missing for the past 25h (Should run once per 24h) Check board - https://outsystems.grafana.net/d/cCbyPA-Vz/entitlement-service?orgId=1&var-ring=dev&var-tenant=All&var-containername=outsystems-entitlement-service&var-clustername=entitlement-service_platform-services&var-servicename=Entitlement.Service&from=now-24h&to=now&viewPanel=80 - runbook_url = https://outsystems.grafana.net/d/cCbyPA-Vz/entitlement-service?orgId=1&var-ring=dev&var-tenant=All&var-containername=outsystems-entitlement-service&var-clustername=entitlement-service_platform-services&var-servicename=Entitlement.Service&from=now-24h&to=now&viewPanel=80 - summary = Missing EJ execution - plat-ga-me-ce-1-01 Source: https://outsystems.grafana.net/alerting/grafana/L0_ytUY4z/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3DL0_ytUY4z&matcher=SendResolve%3Dfalse&matcher=alertingtool%3Dpagerduty&matcher=environment%3Dga&matcher=ring%3Dga&matcher=severity%3Dwarning&matcher=stamp%3Dplat-ga-me-ce-1-01&matcher=team%3Dces&orgId=1 | |||
| 03-23 | SEV3 | Manually Created | started | 0.0 | live | live | — | {panel:bgColor=#deebff} h4. *CONSIDERATIONS BEFORE PERFORMING THE ESCALATION TO R&D (MANDATORY)* None of the points below should be overlooked; skipping them may cause unnecessary effort. ---- Ensure the ticket has been *categorized*; otherwise, the R&D escalation will go unnoticed and not be worked on! *1.* For *Incidents*, the *IMAX* was consulted *beforehand*, on which: * No incident models were found for the reported symptoms *OR* * The incident model did not solve the issue *OR* * The next step indicated in the Incident Model is an escalation to R&D. *2.* For *Questions*, the ChatODC on the *Slack channel* didn't return an acceptable answer. *3.* Any other requests that explicitly indicate an R&D escalation is needed. *4.* *Incident impact assessment* was based on the *prioritization scoring matrix*. ---- h4. R&D ESCALATION FORM Section comments can be removed for easier R&D interpretation. h4. ISSUE DESCRIPTION AND HOW TO REPRODUCE * The customer reported unexpected behavior in the Mobile UI datetime picker: * The date picker has a visual bug on iOS: when selecting a date from the previous month for the first time, the picker does not jump to the previous month. After moving to the next month, this works normally. * Steps to reproduce the issue: Open the date picker > scroll back through the months until you can see a date from the previous month (e.g., if we are in March, scroll to January and you will see the 31st of December) > click the date from the previous month and nothing happens > move to that month > move forward again > click the date again and it should work this time. h4. IMPACT ON THE CUSTOMER Brief description of the impact on the customer/development team/other, including: * Stage where the problem is happening (Development / QA / Production); Development * Frequency of the problem; This issue can be consistently replicated. 
* Business impact or/and development impact; The customer has a go-live deadline on April 10th. h4. TROUBLESHOOTING STEPS & WORKAROUND - We were able to replicate the issue on Sauce Labs using a POC project. No workaround is currently available. h4. TECH DATA OR ANY OTHER RELEVANT INFO * *Tenant ID* (mandatory): *b8eb0b5a-523b-4dfa-9e03-5be29868f3e6* * *Stage ID* (mandatory): 03e4faec-a74f-4ab0-ad47-c8a46bf01574 (Development) and fc113157-52f8-48f2-9917-0ab7012538a3 (Test). * *Application Key* (mandatory if appl.): 4bd061df-31b5-4799-bd72-971ffe6200d2 * *Timeline with start and end date/hour* (mandatory): The ticket was created on March 19, 2026 at 10:50:32, and the issue is still occurring. * *OutSystems revisions of the components involved (this includes for example revision of ODC Studio or Forge Supported Plugins)* (mandatory if appl.): Mobile UI 1.0.3 and Mobile UI 1.1.1 * *Diagnostics report* (mandatory for ODC Studio-related issues): * *Grafana dashboards* (adjusted to timeline/tenant/environment/service): {{[! do not remove this line, this will be used to the trigger Technical Support::Send to R&D - ODC #trigger_send_to_r&d_odc !]}} h3. Attachments [net.outsystemssandboxes.testdatepicker_V0.1.0_6.ipa|https://supportoutsystems.zendesk.com/attachments/token/sj3FBqZAZSPeBQHfjrZV6ZA0d/?name=net.outsystemssandboxes.testdatepicker_V0.1.0_6.ipa] [^net.outsystemssandboxes.testdatepicker_V0.1.0_6.ipa] [1773931649027_video (3).mp4|https://supportoutsystems.zendesk.com/attachments/token/LMCOZzUUC7Onl59zWMxN3M8JS/?name=1773931649027_video+%283%29.mp4] !1773931649027_video (3).mp4|width=425,alt="1773931649027_video (3).mp4"! * Please see Zendesk Support tab for further comments and attachments. {panel} | |||
| 03-23 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: alert=1, highNarrow=1, highNarrowQuery=1, highWide=1, highWideQuery=1, lowNarrow=1, lowNarrowQuery=1, lowWide=1, lowWideQuery=0.6120400299368521 Labels: - alertname = [SSC] Compositions CSA latency - Error Budget Burn Rate is Very High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - cluster = rundev-osall-ap-se-1-01 - du = cloud-compositions - environment = osall - grafana_folder = [Platform Engineering] SSC - grafana_slo_severity = warning - grafana_slo_uuid = nyuv5xfxoikxajk0zz418 - notificationtool = pagerduty - service = cloud-compositions-provisioner - service_name = cloud-compositions - severity = warning - team = SSC - team_name = ssc Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?orgId=1&var-environment=osall&var-cluster=rundev-osall-ap-se-1-01 - name = SLO Burn Rate Very High - runbook_url = - slo_name = [SSC] Compositions CSA latency - summary = Compositions CSA latency SLO Burn Rate Very High Source: https://outsystems.grafana.net/alerting/grafana/efa56x9bn7pj4f/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Defa56x9bn7pj4f&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=cluster%3Drundev-osall-ap-se-1-01&matcher=du%3Dcloud-compositions&matcher=environment%3Dosall&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Dnyuv5xfxoikxajk0zz418&matcher=notificationtool%3Dpagerduty&matcher=service%3Dcloud-compositions-provisioner&matcher=service_name%3Dcloud-compositions&matcher=severity%3Dwarning&matcher=team%3DSSC&matcher=team_name%3Dssc&orgId=1 Dashboard: https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774247210000&orgId=1&to=1774250877343 Panel: 
https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774247210000&orgId=1&to=1774250877343&viewPanel=1 | |||
| 03-22 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: alert=1, highNarrow=1, highNarrowQuery=0.5, highWide=1, highWideQuery=0.2857142857142857, lowNarrow=1, lowNarrowQuery=0.4, lowWide=0, lowWideQuery=0.0434782608695653 Labels: - alertname = [SSC] Compositions CSA latency - Error Budget Burn Rate is Very High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - cluster = rundev-osall-ap-se-1-03 - du = cloud-compositions - environment = osall - grafana_folder = [Platform Engineering] SSC - grafana_slo_severity = warning - grafana_slo_uuid = nyuv5xfxoikxajk0zz418 - notificationtool = pagerduty - service = cloud-compositions-provisioner - service_name = cloud-compositions - severity = warning - team = SSC - team_name = ssc Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?orgId=1&var-environment=osall&var-cluster=rundev-osall-ap-se-1-03 - name = SLO Burn Rate Very High - runbook_url = - slo_name = [SSC] Compositions CSA latency - summary = Compositions CSA latency SLO Burn Rate Very High Source: https://outsystems.grafana.net/alerting/grafana/efa56x9bn7pj4f/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Defa56x9bn7pj4f&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=cluster%3Drundev-osall-ap-se-1-03&matcher=du%3Dcloud-compositions&matcher=environment%3Dosall&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Dnyuv5xfxoikxajk0zz418&matcher=notificationtool%3Dpagerduty&matcher=service%3Dcloud-compositions-provisioner&matcher=service_name%3Dcloud-compositions&matcher=severity%3Dwarning&matcher=team%3DSSC&matcher=team_name%3Dssc&orgId=1 Dashboard: https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774241270000&orgId=1&to=1774244937342 Panel: 
https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774241270000&orgId=1&to=1774244937342&viewPanel=1 | |||
| 03-23 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: alert=1, highNarrow=1, highNarrowQuery=0.045454545454545414, highWide=0, highWideQuery=0.021739329305923594, lowNarrow=1, lowNarrowQuery=0.023809523809523836, lowWide=1, lowWideQuery=0.011494112425153191 Labels: - alertname = [SSC] Compositions CSA latency - Error Budget Burn Rate is High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - cluster = rundev-osall-ap-se-1-03 - du = cloud-compositions - environment = osall - grafana_folder = [Platform Engineering] SSC - grafana_slo_severity = warning - grafana_slo_uuid = nyuv5xfxoikxajk0zz418 - notificationtool = pagerduty - service = cloud-compositions-provisioner - service_name = cloud-compositions - severity = warning - team = SSC - team_name = ssc Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?orgId=1&var-environment=osall&var-cluster=rundev-osall-ap-se-1-03 - name = SLO Burn Rate High - runbook_url = - slo_name = [SSC] Compositions CSA latency - summary = Compositions CSA latency SLO Burn Rate High Source: https://outsystems.grafana.net/alerting/grafana/dfa56x9bn7pj5b/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Ddfa56x9bn7pj5b&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=cluster%3Drundev-osall-ap-se-1-03&matcher=du%3Dcloud-compositions&matcher=environment%3Dosall&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Dnyuv5xfxoikxajk0zz418&matcher=notificationtool%3Dpagerduty&matcher=service%3Dcloud-compositions-provisioner&matcher=service_name%3Dcloud-compositions&matcher=severity%3Dwarning&matcher=team%3DSSC&matcher=team_name%3Dssc&orgId=1 Dashboard: 
https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774239890000&orgId=1&to=1774243556024 Panel: https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774239890000&orgId=1&to=1774243556024&viewPanel=1 | |||
| 03-22 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: alert=1, highNarrow=1, highNarrowQuery=1, highWide=1, highWideQuery=0.16666666666666663, lowNarrow=1, lowNarrowQuery=0.33333333333333337, lowWide=0, lowWideQuery=0.023809523809523836 Labels: - alertname = [SSC] Compositions CSA latency - Error Budget Burn Rate is Very High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - cluster = rundev-osall-ap-se-1-03 - du = cloud-compositions - environment = osall - grafana_folder = [Platform Engineering] SSC - grafana_slo_severity = warning - grafana_slo_uuid = nyuv5xfxoikxajk0zz418 - notificationtool = pagerduty - service = cloud-compositions-provisioner - service_name = cloud-compositions - severity = warning - team = SSC - team_name = ssc Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?orgId=1&var-environment=osall&var-cluster=rundev-osall-ap-se-1-03 - name = SLO Burn Rate Very High - runbook_url = - slo_name = [SSC] Compositions CSA latency - summary = Compositions CSA latency SLO Burn Rate Very High Source: https://outsystems.grafana.net/alerting/grafana/efa56x9bn7pj4f/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Defa56x9bn7pj4f&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=cluster%3Drundev-osall-ap-se-1-03&matcher=du%3Dcloud-compositions&matcher=environment%3Dosall&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Dnyuv5xfxoikxajk0zz418&matcher=notificationtool%3Dpagerduty&matcher=service%3Dcloud-compositions-provisioner&matcher=service_name%3Dcloud-compositions&matcher=severity%3Dwarning&matcher=team%3DSSC&matcher=team_name%3Dssc&orgId=1 Dashboard: https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774239770000&orgId=1&to=1774243437307 Panel: https://outsystems.grafana.net/d/grafana_slo_app-nyuv5xfxoikxajk0zz418?from=1774239770000&orgId=1&to=1774243437307&viewPanel=1 | |||
| 03-22 | SEV3 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=0, B=0, C=1 Labels: - alertname = STAMP PROD - INGESTION API PRODMETRICS/METRIC/AUDIT LOGS Endpoint: Requests over Time - alertingtool = pagerduty - alerttype = NoRequests - api_endpoint = /v1/audit-logs - cluster = datap-ga-sa-east-1-01 - dynTitle = INGESTION API: The number of requests for /v1/audit-logs in [no value] has reduced to zero - grafana_folder = Data Stamp - Data Platform - metaInfo = api_endpoint: /v1/audit-logs, Threshold : 0, Current Value : 0 - notificationtool = pagerduty - severity = High - team = dna - type = stamp Annotations: - Component = Ingestion API - Environment = [no value] - Threshold = 0 - description = This alert checks for request over time in ingestion API for endpoints: Product Metrics, Audit Logs and Metrics over last 15 mins - log = 0 - runbook_url = https://outsystemsrd.atlassian.net/wiki/spaces/RDPG/pages/3481929142/Ingestion+API+503+-+No+healthy+upstream - summary = Component : Ingestion API, Environment : [no value] Source: https://outsystems.grafana.net/alerting/grafana/eeoia391q41s0e/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Deeoia391q41s0e&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3DNoRequests&matcher=api_endpoint%3D%2Fv1%2Faudit-logs&matcher=cluster%3Ddatap-ga-sa-east-1-01&matcher=dynTitle%3DINGESTION+API%3A+The+number+of+requests+for+%2Fv1%2Faudit-logs+in+%5Bno+value%5D+has+reduced+to+zero&matcher=metaInfo%3Dapi_endpoint%3A++%2Fv1%2Faudit-logs%2C+Threshold+%3A+0%2C+Current+Value+%3A+0&matcher=notificationtool%3Dpagerduty&matcher=severity%3DHigh&matcher=team%3Ddna&matcher=type%3Dstamp&orgId=1 Dashboard: https://outsystems.grafana.net/d/e8f3bd79-b58d-4c98-9212-258fb3238372?from=1774239560000&orgId=1&to=1774243228060 Panel: https://outsystems.grafana.net/d/e8f3bd79-b58d-4c98-9212-258fb3238372?from=1774239560000&orgId=1&to=1774243228060&viewPanel=46 | |||
| 03-22 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: alert=1, highNarrow=1, highNarrowQuery=1, highWide=1, highWideQuery=1, lowNarrow=1, lowNarrowQuery=1, lowWide=0, lowWideQuery=0 Labels: - alertname = Velero Backups Success - Error Budget Burn Rate is Very High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - alerttype = ice-velero-SLO-success-rate - cluster = rundev-osall-ap-se-1-02 - du = infra-core-stack - grafana_folder = Infrastructure & Cloud Engineering (ICE)/SLOs - grafana_slo_severity = warning - grafana_slo_uuid = olrkr5rvf4zm8ze3ov4as - notificationtool = pagerduty - service = velero - service_name = velero - severity = warning - team = ICE - team_name = ICE Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-olrkr5rvf4zm8ze3ov4as?orgId=1&var-cluster=rundev-osall-ap-se-1-02 - name = Burn Rate Very High - runbook_url = https://outsystemsrd.atlassian.net/wiki/spaces/RDCCPC/pages/5596545238/ICE+Velero+Backups+Success - slo_name = Velero Backups Success - summary = Velero backup success SLO Burn Rate Very High Source: https://outsystems.grafana.net/alerting/grafana/ff7ih3ieqwgzkf/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dff7ih3ieqwgzkf&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3Dice-velero-SLO-success-rate&matcher=cluster%3Drundev-osall-ap-se-1-02&matcher=du%3Dinfra-core-stack&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Dolrkr5rvf4zm8ze3ov4as&matcher=notificationtool%3Dpagerduty&matcher=service%3Dvelero&matcher=service_name%3Dvelero&matcher=severity%3Dwarning&matcher=team%3DICE&matcher=team_name%3DICE&orgId=1 Dashboard: https://outsystems.grafana.net/d/grafana_slo_app-olrkr5rvf4zm8ze3ov4as?from=1774239030000&orgId=1&to=1774242698669 Panel: https://outsystems.grafana.net/d/grafana_slo_app-olrkr5rvf4zm8ze3ov4as?from=1774239030000&orgId=1&to=1774242698669&viewPanel=1 | |||
| 03-22 | SEV3 | Manually Created | in_triage | 0.0 | — | — | — | **Firing** Value: A=1, B=1 Labels: - alertname = RDO Cleaner - Orphaned Cluster Detected - alertingtool = pagerduty - grafana_folder = PaaS/RDO - ring = ea - severity = low - stamp = runp-ea-ap-se-1-01 - team = runtime-data-operator Annotations: - summary = RDO Cleaner - Orphaned Cluster Detected ea Source: https://outsystems.grafana.net/alerting/grafana/cffs9uidr124gc/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcffs9uidr124gc&matcher=alertingtool%3Dpagerduty&matcher=ring%3Dea&matcher=severity%3Dlow&matcher=stamp%3Drunp-ea-ap-se-1-01&matcher=team%3Druntime-data-operator&orgId=1 | |||
| 03-22 | SEV3 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=0, B=0, C=1 Labels: - alertname = STAMP PROD - INGESTION API PRODMETRICS/METRIC/AUDIT LOGS Endpoint: Requests over Time - alertingtool = pagerduty - alerttype = NoRequests - api_endpoint = /v1/audit-logs - cluster = datap-ga-sa-east-1-01 - dynTitle = INGESTION API: The number of requests for /v1/audit-logs in [no value] has reduced to zero - grafana_folder = Data Stamp - Data Platform - metaInfo = api_endpoint: /v1/audit-logs, Threshold : 0, Current Value : 0 - notificationtool = pagerduty - severity = High - team = dna - type = stamp Annotations: - Component = Ingestion API - Environment = [no value] - Threshold = 0 - description = This alert checks for request over time in ingestion API for endpoints: Product Metrics, Audit Logs and Metrics over last 15 mins - log = 0 - runbook_url = https://outsystemsrd.atlassian.net/wiki/spaces/RDPG/pages/3481929142/Ingestion+API+503+-+No+healthy+upstream - summary = Component : Ingestion API, Environment : [no value] Source: https://outsystems.grafana.net/alerting/grafana/eeoia391q41s0e/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Deeoia391q41s0e&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3DNoRequests&matcher=api_endpoint%3D%2Fv1%2Faudit-logs&matcher=cluster%3Ddatap-ga-sa-east-1-01&matcher=dynTitle%3DINGESTION+API%3A+The+number+of+requests+for+%2Fv1%2Faudit-logs+in+%5Bno+value%5D+has+reduced+to+zero&matcher=metaInfo%3Dapi_endpoint%3A++%2Fv1%2Faudit-logs%2C+Threshold+%3A+0%2C+Current+Value+%3A+0&matcher=notificationtool%3Dpagerduty&matcher=severity%3DHigh&matcher=team%3Ddna&matcher=type%3Dstamp&orgId=1 Dashboard: https://outsystems.grafana.net/d/e8f3bd79-b58d-4c98-9212-258fb3238372?from=1774236560000&orgId=1&to=1774240228060 Panel: https://outsystems.grafana.net/d/e8f3bd79-b58d-4c98-9212-258fb3238372?from=1774236560000&orgId=1&to=1774240228060&viewPanel=46 | |||
| 03-22 | SEV2 | System-wide SLO | cancelled | 0.0 | — | — | Yes | SLO Name: coredns-success-rate SLO Service Name: SRE - QA (sre-qa) Alert Conditions: Average burn rate ≥ 10x, sustained for 30 minutes Ring: ga Region: me-central-1 Stamps: data, identity, ngs, platform, runtime | |||
| 03-22 | SEV3 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=0, B=0, C=1 Labels: - alertname = STAMP PROD - INGESTION API PRODMETRICS/METRIC/AUDIT LOGS Endpoint: Requests over Time - alertingtool = pagerduty - alerttype = NoRequests - api_endpoint = /v1/audit-logs - cluster = datap-ga-sa-east-1-01 - dynTitle = INGESTION API: The number of requests for /v1/audit-logs in [no value] has reduced to zero - grafana_folder = Data Stamp - Data Platform - metaInfo = api_endpoint: /v1/audit-logs, Threshold : 0, Current Value : 0 - notificationtool = pagerduty - severity = High - team = dna - type = stamp Annotations: - Component = Ingestion API - Environment = [no value] - Threshold = 0 - description = This alert checks for request over time in ingestion API for endpoints: Product Metrics, Audit Logs and Metrics over last 15 mins - log = 0 - runbook_url = https://outsystemsrd.atlassian.net/wiki/spaces/RDPG/pages/3481929142/Ingestion+API+503+-+No+healthy+upstream - summary = Component : Ingestion API, Environment : [no value] Source: https://outsystems.grafana.net/alerting/grafana/eeoia391q41s0e/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Deeoia391q41s0e&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3DNoRequests&matcher=api_endpoint%3D%2Fv1%2Faudit-logs&matcher=cluster%3Ddatap-ga-sa-east-1-01&matcher=dynTitle%3DINGESTION+API%3A+The+number+of+requests+for+%2Fv1%2Faudit-logs+in+%5Bno+value%5D+has+reduced+to+zero&matcher=metaInfo%3Dapi_endpoint%3A++%2Fv1%2Faudit-logs%2C+Threshold+%3A+0%2C+Current+Value+%3A+0&matcher=notificationtool%3Dpagerduty&matcher=severity%3DHigh&matcher=team%3Ddna&matcher=type%3Dstamp&orgId=1 Dashboard: https://outsystems.grafana.net/d/e8f3bd79-b58d-4c98-9212-258fb3238372?from=1774231160000&orgId=1&to=1774234828087 Panel: https://outsystems.grafana.net/d/e8f3bd79-b58d-4c98-9212-258fb3238372?from=1774231160000&orgId=1&to=1774234828087&viewPanel=46 | |||
| 03-22 | SEV3 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=5, B=5 Labels: - alertname = [ROPS-MAINTENANCE] - Sent Inactive Workflow Event - alertingtool = pagerduty - grafana_folder = Extended Runtime - notificationtool = pagerduty - ring = osall - severity = info - team = ExtendedRuntime Annotations: - description = Environment: osall Tenant: [no value] Workflow Key: [no value] Current Revision: [no value] Active Revision: [no value] - summary = [ROPS-MAINTENANCE] Sent Inactive Workflow Event in osall: [no value] Source: https://outsystems.grafana.net/alerting/grafana/bfa9irh3l44cgd/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dbfa9irh3l44cgd&matcher=alertingtool%3Dpagerduty&matcher=notificationtool%3Dpagerduty&matcher=ring%3Dosall&matcher=severity%3Dinfo&matcher=team%3DExtendedRuntime&orgId=1 | |||
| 03-22 | SEV3 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=0, B=0, C=1 Labels: - alertname = STAMP PROD - INGESTION API PRODMETRICS/METRIC/AUDIT LOGS Endpoint: Requests over Time - alertingtool = pagerduty - alerttype = NoRequests - api_endpoint = /v1/audit-logs - cluster = datap-ga-sa-east-1-01 - dynTitle = INGESTION API: The number of requests for /v1/audit-logs in [no value] has reduced to zero - grafana_folder = Data Stamp - Data Platform - metaInfo = api_endpoint: /v1/audit-logs, Threshold : 0, Current Value : 0 - notificationtool = pagerduty - severity = High - team = dna - type = stamp Annotations: - Component = Ingestion API - Environment = [no value] - Threshold = 0 - description = This alert checks for request over time in ingestion API for endpoints: Product Metrics, Audit Logs and Metrics over last 15 mins - log = 0 - runbook_url = https://outsystemsrd.atlassian.net/wiki/spaces/RDPG/pages/3481929142/Ingestion+API+503+-+No+healthy+upstream - summary = Component : Ingestion API, Environment : [no value] Source: https://outsystems.grafana.net/alerting/grafana/eeoia391q41s0e/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Deeoia391q41s0e&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3DNoRequests&matcher=api_endpoint%3D%2Fv1%2Faudit-logs&matcher=cluster%3Ddatap-ga-sa-east-1-01&matcher=dynTitle%3DINGESTION+API%3A+The+number+of+requests+for+%2Fv1%2Faudit-logs+in+%5Bno+value%5D+has+reduced+to+zero&matcher=metaInfo%3Dapi_endpoint%3A++%2Fv1%2Faudit-logs%2C+Threshold+%3A+0%2C+Current+Value+%3A+0&matcher=notificationtool%3Dpagerduty&matcher=severity%3DHigh&matcher=team%3Ddna&matcher=type%3Dstamp&orgId=1 Dashboard: https://outsystems.grafana.net/d/e8f3bd79-b58d-4c98-9212-258fb3238372?from=1774228160000&orgId=1&to=1774231828070 Panel: https://outsystems.grafana.net/d/e8f3bd79-b58d-4c98-9212-258fb3238372?from=1774228160000&orgId=1&to=1774231828070&viewPanel=46 | |||
| 03-22 | SEV3 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=7, B=7 Labels: - alertname = [ROPS-MAINTENANCE] - Sent Inactive Workflow Event - alertingtool = pagerduty - grafana_folder = Extended Runtime - notificationtool = pagerduty - ring = osall - severity = info - team = ExtendedRuntime Annotations: - description = Environment: osall Tenant: [no value] Workflow Key: [no value] Current Revision: [no value] Active Revision: [no value] - summary = [ROPS-MAINTENANCE] Sent Inactive Workflow Event in osall: [no value] Source: https://outsystems.grafana.net/alerting/grafana/bfa9irh3l44cgd/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dbfa9irh3l44cgd&matcher=alertingtool%3Dpagerduty&matcher=notificationtool%3Dpagerduty&matcher=ring%3Dosall&matcher=severity%3Dinfo&matcher=team%3DExtendedRuntime&orgId=1 | |||
| 03-22 | SEV3 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=1, B=1 Labels: - alertname = [ROPS-MAINTENANCE] - Found Inactive Workflow - alertingtool = pagerduty - grafana_folder = Extended Runtime - notificationtool = pagerduty - ring = osall - severity = info - team = ExtendedRuntime Annotations: - description = Environment: osall Tenant: [no value] Workflow Key: [no value] Current Revision: [no value] Active Revision: [no value] - summary = [ROPS-MAINTENANCE] Found inactive Workflow in osall: [no value] Source: https://outsystems.grafana.net/alerting/grafana/bfa5wff8lw0lcd/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dbfa5wff8lw0lcd&matcher=alertingtool%3Dpagerduty&matcher=notificationtool%3Dpagerduty&matcher=ring%3Dosall&matcher=severity%3Dinfo&matcher=team%3DExtendedRuntime&orgId=1 | |||
| 03-22 | SEV3 | Manually Created | in_triage | 0.0 | — | — | — | **Firing** Value: A=-1, B=-1 Labels: - alertname = RDO Cleaner - Orphaned Cluster Detected - alertingtool = pagerduty - grafana_folder = PaaS/RDO - ring = dev - severity = low - stamp = rundev-dev-us-east-1-01 - team = runtime-data-operator Annotations: - datasource_uid = grafanacloud-outsystemstest-logs - grafana_state_reason = NoData, KeepLast - ref_id = A - summary = RDO Cleaner - Orphaned Cluster Detected dev Source: https://outsystems.grafana.net/alerting/grafana/cffs9uidr124gc/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcffs9uidr124gc&matcher=alertingtool%3Dpagerduty&matcher=ring%3Ddev&matcher=severity%3Dlow&matcher=stamp%3Drundev-dev-us-east-1-01&matcher=team%3Druntime-data-operator&orgId=1 | |||
| 03-22 | SEV3 | Manually Created | in_triage | 0.0 | — | — | — | **Firing** Value: A=-1, B=-1 Labels: - alertname = RDO Cleaner - Orphaned Cluster Detected - alertingtool = pagerduty - grafana_folder = PaaS/RDO - ring = ga - severity = low - stamp = rundev-ga-ap-ne-1-01 - team = runtime-data-operator Annotations: - datasource_uid = grafanacloud-outsystemstest-logs - grafana_state_reason = NoData, KeepLast - ref_id = A - summary = RDO Cleaner - Orphaned Cluster Detected ga Source: https://outsystems.grafana.net/alerting/grafana/cffs9uidr124gc/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcffs9uidr124gc&matcher=alertingtool%3Dpagerduty&matcher=ring%3Dga&matcher=severity%3Dlow&matcher=stamp%3Drundev-ga-ap-ne-1-01&matcher=team%3Druntime-data-operator&orgId=1 | |||
| 03-22 | SEV4 | Manually Created | in_triage | 0.0 | — | — | — | **Firing** Value: alert=1, highNarrow=0, highNarrowQuery=1, highWide=0, highWideQuery=1, lowNarrow=1, lowNarrowQuery=1, lowWide=1, lowWideQuery=1 Labels: - alertname = KEDA processing Latency - Error Budget Burn Rate is High - __grafana_origin = plugin/grafana-slo-app - alertingtool = pagerduty - alerttype = ice-keda-SLO-latency - cluster = datap-ga-ap-se-1-01 - du = catalog-stack - grafana_folder = ICE - grafana_slo_severity = warning - grafana_slo_uuid = zro0c5ht0hh2jbc0y6yg0 - notificationtool = pagerduty - service = keda - service_name = keda - severity = warning - team = ICE - team_name = ICE Annotations: - description = Error budget is burning too fast. - grafana_slo_dashboard_url = https://outsystems.grafana.net/d/grafana_slo_app-zro0c5ht0hh2jbc0y6yg0?orgId=1&var-cluster=datap-ga-ap-se-1-01 - name = SLO Burn Rate High - runbook_url = https://outsystemsrd.atlassian.net/wiki/spaces/RDCCPC/pages/5564662104/ICE+KEDA+Latency+runbook - slo_name = KEDA processing Latency - summary = KEDA scaling latency SLO Burn Rate High Source: https://outsystems.grafana.net/alerting/grafana/cf7ih3o8xcwe9c/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcf7ih3o8xcwe9c&matcher=__grafana_origin%3Dplugin%2Fgrafana-slo-app&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3Dice-keda-SLO-latency&matcher=cluster%3Ddatap-ga-ap-se-1-01&matcher=du%3Dcatalog-stack&matcher=grafana_slo_severity%3Dwarning&matcher=grafana_slo_uuid%3Dzro0c5ht0hh2jbc0y6yg0&matcher=notificationtool%3Dpagerduty&matcher=service%3Dkeda&matcher=service_name%3Dkeda&matcher=severity%3Dwarning&matcher=team%3DICE&matcher=team_name%3DICE&orgId=1 Dashboard: https://outsystems.grafana.net/d/grafana_slo_app-zro0c5ht0hh2jbc0y6yg0?from=1774034360000&orgId=1&to=1774230160502 Panel: https://outsystems.grafana.net/d/grafana_slo_app-zro0c5ht0hh2jbc0y6yg0?from=1774034360000&orgId=1&to=1774230160502&viewPanel=1 | |||
| 03-22 | SEV3 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=2, B=2 Labels: - alertname = [ROPS-MAINTENANCE] - Sent Inactive Workflow Event - alertingtool = pagerduty - grafana_folder = Extended Runtime - notificationtool = pagerduty - ring = osall - severity = info - team = ExtendedRuntime Annotations: - description = Environment: osall Tenant: [no value] Workflow Key: [no value] Current Revision: [no value] Active Revision: [no value] - summary = [ROPS-MAINTENANCE] Sent Inactive Workflow Event in osall: [no value] Source: https://outsystems.grafana.net/alerting/grafana/bfa9irh3l44cgd/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dbfa9irh3l44cgd&matcher=alertingtool%3Dpagerduty&matcher=notificationtool%3Dpagerduty&matcher=ring%3Dosall&matcher=severity%3Dinfo&matcher=team%3DExtendedRuntime&orgId=1 | |||
| 03-22 | SEV3 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=1, B=1 Labels: - alertname = [ROPS-MAINTENANCE] - Found Inactive Workflow - alertingtool = pagerduty - grafana_folder = Extended Runtime - notificationtool = pagerduty - ring = osall - severity = info - team = ExtendedRuntime Annotations: - description = Environment: osall Tenant: [no value] Workflow Key: [no value] Current Revision: [no value] Active Revision: [no value] - summary = [ROPS-MAINTENANCE] Found inactive Workflow in osall: [no value] Source: https://outsystems.grafana.net/alerting/grafana/bfa5wff8lw0lcd/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dbfa5wff8lw0lcd&matcher=alertingtool%3Dpagerduty&matcher=notificationtool%3Dpagerduty&matcher=ring%3Dosall&matcher=severity%3Dinfo&matcher=team%3DExtendedRuntime&orgId=1 | |||
| 03-22 | SEV3 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=0, B=0, C=1 Labels: - alertname = STAMP PROD - INGESTION API PRODMETRICS/METRIC/AUDIT LOGS Endpoint: Requests over Time - alertingtool = pagerduty - alerttype = NoRequests - api_endpoint = /v1/audit-logs - cluster = datap-ga-sa-east-1-01 - dynTitle = INGESTION API: The number of requests for /v1/audit-logs in [no value] has reduced to zero - grafana_folder = Data Stamp - Data Platform - metaInfo = api_endpoint: /v1/audit-logs, Threshold : 0, Current Value : 0 - notificationtool = pagerduty - severity = High - team = dna - type = stamp Annotations: - Component = Ingestion API - Environment = [no value] - Threshold = 0 - description = This alert checks for request over time in ingestion API for endpoints: Product Metrics, Audit Logs and Metrics over last 15 mins - log = 0 - runbook_url = https://outsystemsrd.atlassian.net/wiki/spaces/RDPG/pages/3481929142/Ingestion+API+503+-+No+healthy+upstream - summary = Component : Ingestion API, Environment : [no value] Source: https://outsystems.grafana.net/alerting/grafana/eeoia391q41s0e/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Deeoia391q41s0e&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3DNoRequests&matcher=api_endpoint%3D%2Fv1%2Faudit-logs&matcher=cluster%3Ddatap-ga-sa-east-1-01&matcher=dynTitle%3DINGESTION+API%3A+The+number+of+requests+for+%2Fv1%2Faudit-logs+in+%5Bno+value%5D+has+reduced+to+zero&matcher=metaInfo%3Dapi_endpoint%3A++%2Fv1%2Faudit-logs%2C+Threshold+%3A+0%2C+Current+Value+%3A+0&matcher=notificationtool%3Dpagerduty&matcher=severity%3DHigh&matcher=team%3Ddna&matcher=type%3Dstamp&orgId=1 Dashboard: https://outsystems.grafana.net/d/e8f3bd79-b58d-4c98-9212-258fb3238372?from=1774224860000&orgId=1&to=1774228528104 Panel: https://outsystems.grafana.net/d/e8f3bd79-b58d-4c98-9212-258fb3238372?from=1774224860000&orgId=1&to=1774228528104&viewPanel=46 | |||
| 03-22 | SEV3 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=6, B=6 Labels: - alertname = [ROPS-MAINTENANCE] - Sent Inactive Workflow Event - alertingtool = pagerduty - grafana_folder = Extended Runtime - notificationtool = pagerduty - ring = osall - severity = info - team = ExtendedRuntime Annotations: - description = Environment: osall Tenant: [no value] Workflow Key: [no value] Current Revision: [no value] Active Revision: [no value] - summary = [ROPS-MAINTENANCE] Sent Inactive Workflow Event in osall: [no value] Source: https://outsystems.grafana.net/alerting/grafana/bfa9irh3l44cgd/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dbfa9irh3l44cgd&matcher=alertingtool%3Dpagerduty&matcher=notificationtool%3Dpagerduty&matcher=ring%3Dosall&matcher=severity%3Dinfo&matcher=team%3DExtendedRuntime&orgId=1 | |||
| 03-23 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=0.853783925374349, E=1 Labels: - alertname = Upbound provider memory reaching configured limits - alertingtool = pagerduty - alerttype = ice-xp-aws-provider-memory - cluster = id-osall-ap-se-1-01 - container = upbound-aws-cognitoidp - dynTitle = upbound-aws-cognitoidp memory usage has exceeded 75% of the configured limits over the last hour in id-osall-ap-se-1-01 - environment = osall - grafana_folder = [Platform Engineering] SSC - notificationtool = pagerduty - ring = osall - service = cloud-compositions-provisioner - severity = warning - team = ssc Annotations: - description = upbound-aws-cognitoidp memory usage has exceeded 75% of the configured limits over the last hour in id-osall-ap-se-1-01 - summary = upbound-aws-cognitoidp memory usage has exceeded 75% of the configured limits over the last hour in id-osall-ap-se-1-01 Source: https://outsystems.grafana.net/alerting/grafana/beqx7x908kbnkc/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dbeqx7x908kbnkc&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3Dice-xp-aws-provider-memory&matcher=cluster%3Did-osall-ap-se-1-01&matcher=container%3Dupbound-aws-cognitoidp&matcher=dynTitle%3Dupbound-aws-cognitoidp+memory+usage+has+exceeded+75%25+of+the+configured+limits+over+the+last+hour+in++id-osall-ap-se-1-01&matcher=environment%3Dosall&matcher=notificationtool%3Dpagerduty&matcher=ring%3Dosall&matcher=service%3Dcloud-compositions-provisioner&matcher=severity%3Dwarning&matcher=team%3Dssc&orgId=1 Dashboard: https://outsystems.grafana.net/d/6581e46e4e5c7ba40a07646395ef7b2a?from=1774223010000&orgId=1&to=1774226675630 Panel: https://outsystems.grafana.net/d/6581e46e4e5c7ba40a07646395ef7b2a?from=1774223010000&orgId=1&to=1774226675630&viewPanel=4 | |||
| 03-22 | SEV4 | Manually Created | resolved | 0.0 | 0.02 | 0.02 | — | **Firing** Value: A=0.8299701690673829, E=1 Labels: - alertname = Upbound provider memory reaching configured limits - alertingtool = pagerduty - alerttype = ice-xp-aws-provider-memory - cluster = id-osall-eu-west-1-01 - container = upbound-aws-cognitoidp - dynTitle = upbound-aws-cognitoidp memory usage has exceeded 75% of the configured limits over the last hour in id-osall-eu-west-1-01 - environment = osall - grafana_folder = [Platform Engineering] SSC - notificationtool = pagerduty - ring = osall - service = cloud-compositions-provisioner - severity = warning - team = ssc Annotations: - description = upbound-aws-cognitoidp memory usage has exceeded 75% of the configured limits over the last hour in id-osall-eu-west-1-01 - summary = upbound-aws-cognitoidp memory usage has exceeded 75% of the configured limits over the last hour in id-osall-eu-west-1-01 Source: https://outsystems.grafana.net/alerting/grafana/beqx7x908kbnkc/view?orgId=1 Silence: https://outsystems.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dbeqx7x908kbnkc&matcher=alertingtool%3Dpagerduty&matcher=alerttype%3Dice-xp-aws-provider-memory&matcher=cluster%3Did-osall-eu-west-1-01&matcher=container%3Dupbound-aws-cognitoidp&matcher=dynTitle%3Dupbound-aws-cognitoidp+memory+usage+has+exceeded+75%25+of+the+configured+limits+over+the+last+hour+in++id-osall-eu-west-1-01&matcher=environment%3Dosall&matcher=notificationtool%3Dpagerduty&matcher=ring%3Dosall&matcher=service%3Dcloud-compositions-provisioner&matcher=severity%3Dwarning&matcher=team%3Dssc&orgId=1 Dashboard: https://outsystems.grafana.net/d/6581e46e4e5c7ba40a07646395ef7b2a?from=1774223010000&orgId=1&to=1774226675631 Panel: https://outsystems.grafana.net/d/6581e46e4e5c7ba40a07646395ef7b2a?from=1774223010000&orgId=1&to=1774226675631&viewPanel=4 |
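The `Value:` fields in the Grafana SLO rows above (`highNarrow`, `highWide`, `lowNarrow`, `lowWide`) follow the multiwindow burn-rate pattern: each pair is a fast-burn or slow-burn check over a short and a long window, and the alert fires when both windows of either pair breach. As an illustrative sketch (not Grafana's actual implementation, but consistent with the `alert=1` values in every row above):

```python
# Illustrative sketch of a multiwindow burn-rate condition.
# Assumption: the alert fires when BOTH windows of either the
# fast-burn pair (highNarrow/highWide) or the slow-burn pair
# (lowNarrow/lowWide) have breached their thresholds.

def burn_rate_alert(high_narrow: bool, high_wide: bool,
                    low_narrow: bool, low_wide: bool) -> bool:
    """Return True when either window pair is breached together."""
    fast_burn = high_narrow and high_wide  # short windows: rapid budget burn
    slow_burn = low_narrow and low_wide    # long windows: sustained burn
    return fast_burn or slow_burn

# Values from the 03-22 Velero row (highNarrow=1, highWide=1,
# lowNarrow=1, lowWide=0): only the fast-burn pair fires.
print(burn_rate_alert(True, True, True, False))   # -> True
# Values from the 03-22 KEDA row (0, 0, 1, 1): slow-burn pair fires.
print(burn_rate_alert(False, False, True, True))  # -> True
```

Requiring both windows of a pair keeps the alert from flapping: the narrow window gives fast detection while the wide window confirms the burn is not a momentary spike.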